[Binary artifact: tar archive containing var/home/core/zuul-output/logs/kubelet.log.gz — a gzip-compressed kubelet log collected under a Zuul CI job's output directory. The compressed payload is binary data and is not representable as text; it has been omitted. Only the archive structure (var/home/core/zuul-output/, var/home/core/zuul-output/logs/, and the kubelet.log.gz member) is recoverable from this dump.]
P˚'jZ.U:< '~aCE EƒZRI ><ʰ2NOj° ’ DJ#c gstUw4<МB$?2G&p]`a`:839(Éh2@=H D v@D'xfJy׫\csaR3Ccm8|*il9i4]qך>0 Q5@d:޼jmJUΪrKo!n W0p 3 ]$D_ ONr%T` t.Yu0q;X yy[Lt"aO.b.\I:P`k U\uU  6&SFgT`M ,E ,:.P!afJ碦y0d`Gb3pƸʻ5?}^ _;Á :%^XrHFs =dskS(䡻#x=BA# P#HOp'xՀ!w`X#28W/&+`qE bʅs "+fk ; C~& XةqB cX8 T:>Y\j`-r5@\ v4O aF[c|p<5Pk2C)pDX6 TqTV3x / Z6l!& \jKG] F,ЬoRIa;uA%r [56A8 FL(\]`!ީ sq1yOW a^It2eDc5dh04kQxب0 ?n|!JSRQ45V%y({1#3#55"y ~3 \ }dcf/'h?@z%&%У@<ܝH DJ@}DJga*^pz!%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@:B%FI $=Q\@(F:F%VL1R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)W !vCR`㇣F h;x%v3Rj1H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@:^%c`rCRA*p@W(?x%舔@^{CJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)nmu[͏jon/ /{Rl~WD X8$r@%`KP.JIt ¥Mj(th;tBN\;FLA7  ǻB\NWR #+Ōf@tp+kѺ+D鉮4sV6 ]!Z/pn]!QNWfzut(F8#+tcX}2f y)7g}7fc]]x-̊F|7_.ŭj n8oqN Վ ƙ]mk`WY5,Lr)OW d(@>N­_YodX.CϦgkQTMJc||\Ћq84;YiI6PG?2rcPȉKWf' j4u|quˎm =!~FעPjM7U6 %-I&+uAhޡS5ˋ$ldOme>[S`E4JxU F x4F6(pjYTX_Y# 4>`vPsJ٧ʂ!`;yr :sQc~POx@O׉ЕQn,E]]ys+L&`  QGtЕٱ[&wsRt\{􄽅ݴsڷq!SK]|0tvu*QpY_)E MͪLЕ1uJ*7׬T=T{c<;RNrڶ?|MJXQrV‚m}rT7vo7߆y\>8e*ܶ~>TV6ǣ|k^P,m_JӝC:k:Q9@Lmc}PI:]>7Rea}8ΉRfm6R:7%3iWF`?M%K'ݾ;+`=Y ]I\]8~[a wռSn W[]\?ׇtu=gqPם/#i,8OξZ^>ȮpkGxk򪬮]}3wwsflr1 mqC[Vٴ [.ɉ8'3σ@JTix9+WƊ6gX ~~s|*&\xWv,\l|f~7>-z.8!?;[?dK>N\4=:/޴s=8rZ'z~6;@~emm)k[ ]+<&\Otce r2+ vY)Ҳ]7pD<&'g1]#e"s<D:`}!R)F9$1qS$[s`N"C>Uq wwi-w};"t&|\^s} k5"IW12(\0A8j\ˤNb2G+!xLԳzy49!H- by RyOVh,yzatC:CDtq2s RI97H#tb|nFr^U>QS Tgzzb:zݵgw?pz^NQߑ8> ndOR$vpMӯeOd?qk/IXrѴOGKu}gyd80gM/G 1H'4wB/4u97KKOhEHFSsg[}3.lg|%߿#*kUm), bIgNd8=7V;T TEo+w!dݱE2,V^+ e׋/c/n_3o+=,s[l+jV-nu5s+T4M$Ѷ=ӛ{y <9p;݅fϳB~l^sݐ{x>{fLT gd{WAhw?\:6=P{Mpu7>347qƁ+ϽȴP\hJ C+fW1; ^eamQJzRkG@Rrxq/'mԌ jBl3 ʐRnx:||ukS&b:w[ۓtlDLc X[ w.>Vj.RY8%IRbr!&z^gY<+;?To8dZg-XwyXp"ۏjKxᢗ[Z U,\8bz()/JbXBݡfز`Csx7l>h <*}޵5q#·0.!ǛݜݗM*g].4.)J!ʯ?!)S!i GbY L}FAa(+ ࡜QYù4Z(txWuC= U?/-D θd솝]-% 'g4qoY@Z,qO-OiT&0crNxK9WL JŲ~w2~c#?Ot;i7VV͛G q@t\$?S[pw6K}u⒖4{F-+w{ϴ{?[W?W~j/qq&0{BP s?y2\t_vI4^OK?IIΞ@=UW7 l*4c^Fp1tr>WŶW~Ȯ^[^i{]Ҵ#cEY(Ma]Q1WۨQ4Ϸ4I'll<]^%~|~,?痟ﹰ?8XJK|,{|EצVߺkiaۛz5kk>rG/KN )ߍwNHؖ@1&%P$!`YSHK{6i}87æNM^VӉC77k9[;r{僇.O!4ᖎ~)dד>zs5(*D+\FIPXy8s *tMJ`&١[{Wڲ(}uNb32:#BI MB􊐀e:O]F@+2Zpk%1c'X=ɀy g]=|\$VӨl[PGOi:Z}eDkBL-9l0G}wӴٟ,tj9@/zKwK tI/PS!ϳ!fܻ^fS TD`uo!+N8SA{/Og^ d:aH^6(oJ %M=_eHG*'_M \ 3pRph%OI:,,J:;ZjH1i,&8W$3!/9`P|0Ň? 0 \`8'Fd%&'h- m՞1d@딑2yW[WOL\ZItM͉P8Sq߱e ٌh>G'j˕)%6n)F7徾Hwz>ىZ+9sq R4 跀El C::E)&ȮPo]E&Y =x։TEZqJȪ9)!~|NٚEoP)^F1@$s\\HB+;aq SIk<((E6 ZĹR A,Ab7N*EP*)luT,pZ@4Gm̵q# ܳ}r%ZzKyS򌽴+_}^OWz4bbj`Ƽ4$0դ-}-i@F= h! w:#8˿˛7F&\5cSZ6|$6l6#v,6lzD,"J'92$k# 7pͲ$}( Ou|JZeaE]J a dL\d3N.te6\M=/mmKR%1Kr ÷-ۙ mp2*7MO;tgBnmYHvWx`g$ P$ )+9*#f \F;zK{뫻Y~'_n/{ GO{0xGn6zK;\?}tڷ@g}wn~9܉/N\YQdn+DDB#ssOb默7ԇ1WUQц ,!ȊpL1)e7? 
YSvhvg6gP2IE4)LHzfeM;Lv _{(/zl[9D G&Y̵B0#]TRI)Ju}%`Ɖƙkم巘<$@$FHר"l23PE3xZzLFe&nXT4P[h+B;£{9w^^=d=?\ob&qK @(alQ LV 9Dュ)!u%5 4 (DE) PC4E1RfQӵmz7(f_v58jV V{@48tt=("hyiBlkk˃**q.nL`s9r'R8 Vg-ӥNK]J3R|tvv7vjEe{x`J#ݯb s7D?&Qp q]sQbQ4^%ȴ S &ԭJF!ϐi@$|FhL@Z &# #x@͢EKx"DJIxbqvcLJI!I{ve}iުt]slj%,~WBgg;Laع8DH8`9WҠ @Lj$"+MI4 YlQKˍ* @ҷ>A כ/ƒ ;D?г!#SCGݱDށew8oE损иY?yN7ZـgؠGѰL?Q0zR/eR/eR/e@ A(b9y-W˔,M?U$ᣱ&ˠH Q CF*Z>ya^G͜4f5[VsW0 d'>JYF x.KdOW5F l krDJs"@mҁu-3(c^@ y!)Pһhs'4cN`|1&nZEn˱Wo'\NF 10=x`)jkg ǥ]Rjc bz"WTd6({-E6."$Un(aM}z2س$͚U7K~.6Q!MJ6ɶ,0IDY19)W\(ߥsXf?&9iKęyasDZǓ QʀI̙!6 ,聳4e[IC:"g9d-SZtCdEų M6Z4/yc=cJb}\nߦB[eb3hؽ8V@f̵X"/D\4 ADՠmdVz>蹣hy}|puI4fØ-Y (] BKPG1 0nt:/m)D1&)d10bT!O泚8{ 'dg ;Z_ƙ$[Gqg}v|(~PL('X޵4#e1[ 9a"vf˄wO[cTd{c&Q2e Jr:&(<2?|Hd&'~W%JN䤹ʥsBf-9VTe[wPQKE#5FûF)R*8-q\65YX\cMzUDC9A?xw ziiV&>HU@l8&Ұͬv*KVU] Ol}*KԜWe1Z^(~N/F?]/Fl|bk⼤Xݖ]o;w| 0OlK-{2`K ؖރm%lD51]=9N\Pp_?II(&xƆlРR CSe=׿kݣk݋ b+e'QLH*ˉs -&&nn!trPyOyyS܇Nח?~C(Jb}6܏~kQ֯⺨{,E P6mr¸!ة0 G!J'B92$H/A҆k$Cax{=R* d۔.} r'qYd.s.{g^+jAaYeK[vmA伜ܺ63ŗÅDZ:Vm{k#n2}lH(4 $ dzH(c3k6jZu9`؎IZs"l_ o~%٩Viabͧ,"kZr-3rIr+oiCQ$l΢mym"vE\﷠Oϭ={6Gwq6~k-~'[v8NEu#zz4FlXF׭v b[D!o"c 3*.vw4⻿:P0zIBՁkv},sUb̠8m$aW7۸ cn| W˛Z`2|v.yp* ղl5׾ &?jVY%M}YH&*cRr@B(5`ZXy*7y fȄptC(ϕZw#DK (}uVcqWE>​{%>%r0K=@nEVJ $ъF)ug,Z .{X<P'1J/nȣHy@"ԏ~ܺ(g[?mf_ S$dFr!r_ɪ8An 鸱x̋ xՏB2hKz<%iURB @2:kt&IK` bLU#iY#YrhʀB+Gx EvTպs:&Φ=+!5;v\5+?j̔ӫIj|'m{?>4嚋_}X&J8 &4deRU;d&G.s]iios|@ *_; ,vQe7Go= v2c 7d{7|njІxFi}ZdV-51W_$(Hܵy30^OvK--$ )\;ҧMkl?'{pkﵮ{^׆WznF{jmKC;0GaZF[O~2m]{]%J[}xeI57ԏ+Vd9B]֨)S I0!Qd$J'_|򥼊IBb6BkRFYH +% PAf=Zò{uTV#w. =/:mmUolͰrMqR9$$ fy=!x1?|-vVq0ǘӌim3IR\Iz,IlOe^a |ߑ)rR9PGTB+SVR n̦q5v|,%S&*A&f%7^:d,_ď&&SLUӱrٷtGXetfiq'Y9z)3ӨeNpDkm,O*D)ORdxU7Z`Ff^wozhy"#SOKQ޳j^mo.e`{F[B=hg'FidbomKw_Q6]RYe]´Δ;:8Rj>r 1J4(fTAT} E9kt)q!XIң Ci43N˭hY}=Zw:fSISה̲!IÜҪd:eB@ZFz4> :/i Mz Bw1ԕ3x%9eMc* >b>EMv.[*rC{:@z F32 n'Khp E*J9$T5T< R7nJ'yc=ұRX*v=η| Vq q Gr*a$5Eb%G_5DUh wA}w>#G};Gg/9$cFa^$ym.HxMfẮhkXv0){Id}BoBb+ഡvjPa0OTavgAzэ}Ut_63\ds( &ro,$oĭDN\xͭ2iggQ4ZZ`B! 9FQ{ ( U̱$ȉ# LX&u>+S2H[zYYYϪuzY+wOQkZ?{ƭʔ_Ξ݌K*?d'NmJI=*cۘRe$Z4J,h\Cw:AT4F 8RAGņRc)xmSIpXbB¿I{8qvy3zZkk_dw}[-"J9aLeT<Ĕg4" %8\jM-Ѩ֝Ei2ǁĉޝ]-uOt|#؃:R\`OSA1gZH{,) (ČMB۩4򝭐uCn;^r7VoYPHY)PJJ%H' egl"a`GYvtv'% G⬃k(At,ּK\c%IIOy}"H|GdH9HX6EPQ`AiHp Q6\Xύ$!\bPl0IJO9uzy\BhMKvʼ=[큐ISﳽ9KFoL2BY"zYB7]lw[dtaHqAđGAw.p;R0v。^&6Ox3M4L!WjB(hJ&h0/N=ӴT+lruXޖ.B PJKiU>yf-p\!ӑdž!Q{BLUMbi(@pkNP$! 1dn'MV.P፧:c\k\ "8O` y:{W]zzb.ՇG|Ж|Mhxԩ%Z 7:߲[ɪ`8P`~h)1o;%F$L("SiC"eѷUtk*YwhGmمÖiQMI7O5FY4@JGIzl u~ʠy,\ጮQL2xdd(;Ai2Գ"!J.2(qO:zJqkdb~f]e; @x b*q8^zs1d!$PZYoB3(UXT w " o9CPZeJ/gniKԉV=NMa)!":/y.fGD̈́XC9`.?*~¬R]6# q?ü[V"NZEGG( -^IL9IhBѪ3-`[9EgkQrhf.St`:CRk⏺~\.ǿǓ>?~ɷ?PO结o83ĬHF0wOhZu47okXiκu3૴rKwX)؃+˷`ZGRU'z׬~b>v$|q7UTT}*bւ**8@L8.0T/ Zh {O{P?z4CYD kHEhpV:B$SrE$|ԓX#}mcv0|}znIGsrx)yGB:*)Ā~g:T #EbUmY(\'0..ZXZF OL3eb9̩K/#y vir(A #H>X&2` ܙ*OT@A!T nP)7u޺^@ /hI7la[_|W7öi}Iԣ\\z>I7p͸X8_b=w7dQlp~zX.ԉ <0m/__ktLRY%Йw;;T6wf ^U"̏e) 殶 _5/{;5woߘΡ2Y (ךY +6dM?۝ YP6sW,KcC*1ƦhfRv1E泼mcGQݽ׹g8e=n}l<JمRDIU!e e +n@2E{3ЖbE{(wqE DF)! 
,|_5X4њ.yDdHQ|L' )kY"ALjMOuz u,%%@K\JL OdGR=R)7z0k9ؖOcнެ%QA$Y.T- (RzԐ(w(Nd*eB޼ ڔvI s2@j^a Ӡț$eWE%3~==<A0xqڒ P1 k(j Gd^Rb9 ="lZnq =0<0j,l6[+ȕsQ3PvPRsOOI5@b>ZUC*&hGנ&(ř@L ːU $s,P9+up?wZll}$V յܳ؁%&s{;ElxS-}.'d #>ǂ:y) J˴1 y%\S 6JT 'CO+ҐQY;iD鲏yr0ڣ'9wR+#O'ȓuyݟo!p>3r=Ә@zDJT`d“2E1j9MIqZłVH IKs}g\Alw+f)ZGC#IBQך=N ";MUrS^!; nDq9kDPt#(Qe40ezƻ[gObșVMgCgυgvG>.~%qh6 ~'$*v!8ҕƽU/m"qbv҆c$݆X:m?jVz&ꐼ~P+X6۪%fQoܞÁU70,8 hW'I xrB]qrܛճU|*3?4muK6m g۪4f*U\W'WUӜl\9%EK6%bz5mY)/Gt4D 埳-dwWіJ?^;Mft5L J7c PS ~2,Vcަ ziUE9ꁆ:n`UœN% 8хߛ45i4#7g?\)[A$D*Eyjh:"rhSY"pAK*tcU/*MGIRCNkiLdN̸BFXPI@z ܩߝ5/fu{hʅ;U{T|KQʾw]J{gr!րsp!nW'y0OjwlS 7E?TZ؜Tʿ}e¡G?j3Sӯْ{9 ~mD-]j^Qɗ٬ ' xXms̻y,}sT{;1*R "BB|v6EoӶ7O|_ / '>{1oZ5 2f e \ej_(\e*•1D Gp+2Kw2:\e* ^q!8S?W9lYC v2V^[m]dPPw/Gl!Posc\j/?MfU|7"G+E{@˶<;;$jaN+-(T@Sǚs&| ߪmn⏯]-j?Mتy6Mҍ 7ɸꪉkYKE[mPʤ*Fd]w]G]}٬Cov+@S[k=bWv۩Fۮm5ڞ߰U^̿⋴nz`^΍j0 dCI#ɹys[}Uxaӥ[8rHc{ ͸ 8J u:ΑA  3@RW(ИC-R>[و_*IR.:D[i6´4a=Vly9 HxJI9+ сsyS;(}۽RB)fő#9= U]S37Ssӵ w·k6tݓ<[rmo.6OmK.ɽpO.{l>z3z>|ju'Go.)*}(&p߶^*bO%ݽoW7W^nbϠ֜~{{w]]rwܭm>ݏۜyay3pv_>6{VY8>Lstn5N) 5:]m %(F=Z"y7.cbӇ9W)e=P4o=yFV2SNO겣ˎf<.Qwu뮭9Z+j+k} K%ʪeoykA}s|;b`2Rʨ+X*%ϢH92'. :=U5TFbէB&hr :A*HlS:j7qjy߶82wz}n},U^z,S{/+.*{KF66{z#G'bI9$:"jrCILD%*.z"b,})eWs1* j@ ĉ(U(kJ`M U@72vg?2*ݰf y{9Oo9}EZoCͮ.IBtn}!jT,IQ̔ ;|e !͚aQnCsBDE74%bvgWŷSF_ۚJUq^Gxu︘cAnq,jΨ'Ԟ.G=[A#T0SdH3`5)%>C /VbM PC&+&死֮~OY/bXҾ&y/때U% .;:;rhыOT[pXίg=HώEPuIJPXh"I?F]*y\z&2 D-@Ϧ_:~[҉RR1^|zٔm1ܗ2pvIA3C Q&q*,Ee׎2ѫE -$gS%@L),瀲ju]*"?D1E()ͼ>x͎BKMk'>_id|/כ) #Bgo, ^ZCIS:񝗪-G^GWԏEmIJ[ c!$|$Jy G \}j>c#(\cF!9&6`*ٚ;Y8l8'7L|'bxwYܪRomm./uxw}ƆL|4|U/2`ͯf my\2ň60!&ءrKuEgؗ$y@&y`FGFT@* wg9TQhLU}:CI]A1jE[&s@PI9Z 'zHTͥ0+$3$t^Xμ8}Z"}#b@=25I.4t}-ipZQ!}t;!TSs6 >&UMbs@ 19#*Eɷwg#rq7]ŧťB*yt #Uc⎑Rt-Pfova]dzŐtm-`.{L1e;b挲4ȉs}UquHR \usJW^sq+cL+XbT𦳝u@;[?.EDzb,-SY3i-댙l2GUL |Q RuP+P XbQ5)CB%*х9a-e2"LTJ'M~+)>A*rK*1&PܱdP)zU~W*2BR1ptEY~M3^:qdA컑M»K3n[pTZKj4,۠v$k5CP,dm"""ۧy_yKE5>l|br5[C2SJ`s2;,VV)(Q]0P }<;-Unru*ifL%am⏷A"-]n0Ɲ(7z6 C:nRk{~?wHIZr PL$'+GL_^yŗ%ݸ%ݸQT| Il5.A]sȚKբ.HYx t֢!U,\B5yV޶ܛ&!T;nx"c__G5z~y-^Mv |o !&|Z\^^2&`$+&V*ŒN`M:9C$WPh1+&$T#I+ԏuI !S";3$EJi]ꩮ ޝ7T$52%(FVG#Ո%."h4RcKnQ^6\[a ^ɳvIx4NRA7-=oMQ'k+:,Kjݲ~ 1k94-ؑ.Cde`-r)]2k<BHsv ALnnn7GͶ}f- B$ȢwhiFJBR"&QpT$ʄ(;yvs)O#3jN<+Pʟ߂ث,Y܊yǯE%~쏿S E2p$4% -oc+33 Ĥs:D "䵼{P:`x98^AʘU:S@6H̯ڽ"C$.}b"Frp!f J|MGp&ew*G 9KmiM5:X9O12y,W ;dmoSCj q #huRr icKl)[g8zCo yPEa\19 I&Htur0#>Tt~^g!wt89^iL L" J%0\2IBbf`,))0YXi5b$ˤ5;.S]A2zHG-{X@,BėBۍsɷ~1״S} %j r4$BS2rSrzI eD㏇GQ f9'*n &H2Mā2AI7c哉#ѵ#=o I + kFxDii"c" k|(A,63x^Zyr۶4Q}g9>jBӄ᫚3CPbzDoF}qJKtnXcXp7Tҏ{9;-Wn耢kTQ\=[zΟZZzFnI2r+JcZj*躊Z-$dxyɽmv|̭Q$~% {[>@ґri4ÿ́x/߆@;ujЛ,,o>oYIzDSoõc:lpu,дE^|=0@R/y$*2a=֘qL"}{}OC3y`UţN tD'"ӑmwvtƳ5T>%TjAʳ`@8Cf(b-5ڤiz[8@=O^GJnGYwZKc"ؘʘW(#(@h%T!F#Z}:+w*bkk!vj`-Ǫ=|PWHn ? dCBn 4"kO-Rr5b e֕@d*mHLz`.Zkm~.P'P]rk<;ӤKɄt !%X$sE即v%wvr\hd|)c$pP8p6V;EQ MmG\mK5/Ѵ Ann: &3[-BOf JJS}ƅ7ZQP#EE æNX}.ܳ8u|~e]g; @xv?LT\ p $EbɢD %'pҢ[ϲ߭53(UXT we" Hrv͠$d/Sv)V}*%&SBDt_. cp1#&<: m nxq\5NX͞ҏ%?>D3Qx"MZLec8gD?s֏bBS'qɢh~'@ , \{gkC2QB9aLI)v<}8 ^ƾ)fjI\U %jqڢDZ\*$ 8=ZTN IHd8!::K[HU3j۴ &G9š_69N8=j0WOG74rXӴtQGkkuDk]|"1[! \/AflGpUɹ8+[fXs3Y\,QC +|}Fm׳ߴٻM62rN6W`v߮Z8ӑܰps 0!-yߩJ%zq{ۏOߞy6O)ӧ'srkq#0ÅIn$#p鿞U[Mc{5^O|vu]nh+c{p@(~z>{\Vo<*q/ ׃+(6Ee)~ZE%HRPAf G C%! 
@kNŽ2x]k^dYtUugs.DHDN8M`b"Xa I$bd$H_ay;I}n7#9U BJ67DJޑ*].sŝ{;r2jmM|#)]fgtvYA:gqh֗[uۙM x؏2aNQyٿ: J-F/Ly>62ӉA KH!)pLL09uQ`>!:p )Qt z=o( f%ȑ|&2` ܙhOT@A!T n)4Ǿ2Y*1=b>RU?T(bZ`C8(&X.oqO E~!c~V4>8~4p,K|}"&KPSNhWsD=+lKa f~^.=o|Kz!K+JQgzLP)Toa4iMd 5 Q]_عȫJ8=x1]o$0c|,H>GYdlMiۯ͵+?Щ wlfO;`\16PCQ%] W`'x Gzx5z9~WSzM{W=ʽ,.Ҋ,R\@@Vڃ+9zo* |_*Kv[K+ɥz V^UwU{pjyR.`tpJ+nw<=]1]8Y8AAıDŽEOo'.p4892b3v(Piv[(mjc;ne~HkC\ e5S vI_cgGVޓSA!$HPNFQ_X/0μCo87Jqۘ.E{Uy^wc =<}?YQk7rhdPfY~<'%wg58 QJ5"Vl4tᴯLg; $Rr] ؐqΦ`A}:x+7\WS%R2Yr/ђ^<+o ro*KkÕ|SW҈2g~lxAQWW40=4@;$?!.>|x -a:A87F1cN|TuLʼnE.>F)/m^Ϋ [n,p?!ggcFݑ v܇8 Syn- 64ѭσ}PŸ6n@(J",QB2‘dv7G떫ݎuź;떷vHգ9ӔxqphM(uc' C"O̴z w==0"ALjM<(D3 ,Z#ՁHO/!&ΚGؗ]JDAI|?^/ŪZ‡^M2k%{j?y_/Q4wZ)`Jϼ2SJ}D"mzC=O:XHN;kd4Jc,o5b$3ԁ*Ř[T*۲&g˧h8(2g{_Cl=vEw_yjѭ[ 7dZ#]Ȩd))Z:Rd!x";;ڑjHA`[>Av\G!@bdѻJP `4T#h!k)w(Nd*eBݼ ڔvtJzWY؍ʢۉ c HpDTX6ՂX浈*%&!V!2g40 b`4r\*Jjn) TDAP$Wk]&v v+N!@|[HBBI}ڒXa\'/% A[i6:9F!﹄kFxhr:iy5Aerqh4\p'mB;"]KH}1ż <[VH3#k= IDADK&(ً}dka 5<<R,Qc %I˘K"L@( :hsjY&w%+-'jXޥCr8΃8O(ԛ0V>xںV IJXôf({ ѡy갴Uߩ0 YH y{e[_ӢKpi]B>DUp>D>l\kx!4RMUhoVpPoKLYFY -N&J?`E_ŏeem9Q9rc)8UѦt6y}B]s|~TnX60 N è28&XV)(AKp9Cr"Vu (j,l2S7j(;A@^%)X>(U#ō%y(Uý@$($q ' @xz?LH p$EbPxBiq '!\`@u4H<3(xTXTH E2ȑ o"A$^a#u61%XUJ]qT~b؀r\?&D{m nx_}'wv^U#oِB퓿p7>iI{/~Yid!ٿ Up%cUaP[$W>p9#6qT" 2QB9aǗSx/x}+,by]=ŏl. )aIx{G۟,\l4 5Ȍ\LzR5b_XЀf~_,&sfŷ" ^h۟n3W%;ާ_Wu $G . Z^8l)CUD8HxHxE7h˭G}O81L@ 8i܈(gsI2/ ,U<&ī~+na!tƜos5'm|ع-oui!3SP QTVzJE,l䓖J6o'>=Rh 3e}ZEgAWwo7yڙ~3rF!NO.p'AD ι+YRM3uPn>+"!;m9#ur)$5DB-͒3&HG'-( Ȭ`Ɛ@v5[)g0;A j=lIZFս(YY}Xe{1*4D$rsnNsnюmKtkB9EkeQn%8Kp4!)R*DtR95^d Sk!scc I2& O5Zks 2%kk\g謷!.8>vθ~zmJx'F$ͻAP)pY\ļGqv2u ^: ʓaLtP)$ ^HTflW 2hɀ rFFD/JHRJ!S4v[Ihc"1MPבDB0*QYíYec3tZبx X'W7e,1UIB{o[/oWjp95"eµrJY5S=处)][q ɕG<K>eoui)ܺZwX]mټSr΃]5w 6\p57}phM-h~4LOFU_]8xQ%w؝ܨW$ĚtӦmo5W[j[ {/3>GkUy١n3QtJ}cF%_AGY>{yڨ̯zNש u,%@ =+K&b'n]zm5X-s,rhy \',Q]?sSÛGmFpԛS^E-i偠Ax1Id<T4dLi%& %f uB f.gPE7YOܮ4~xsݼ}skd?Sq9zc'o=$CcTmbY9.F+S(pJ&!*rfLPoR'!|*m'=o`m\#c@9|QbdmG0Ħ UGKoY cK JuOq~مjF /rFyTi~pYB #ߣ|/ N0$wI[8 j#<,'ٷ̟Y<PNٻ޶dW|~? lfv'3`30k5%(%﷚l1%٦,)ej6O봕g_E{huݩkה rҾGc?^CpiT` .=W{7!mJ2]ݸ~kiMNb4bh×PhrTۑ5˶a }sՅoLr$Ta7UʚkNaP[Ǜ+iw0Q-M(d̈́gpfrg0q-iֵ4L:ASY:zE5K)>/f۞":l ]4&&HVpOм)2Փ{Qh[8"`>-\&D[x|ߥIx\w}}G/UJoqXa:a(U Jw)lp`jիW_H^~ru/vL6ù\768 _ E]|mLN ~^9TUO*{3ꗯ,&jqhwɰjܠxMјYc[\{bRģ&W]M`M' T<1'Znl7+\bp%g.*Six`,(U2xi23Ń8=;>~ѣ<ɺ6-QP eXRFJRqH+DQw[b[;G Bjp]xQ:_/ַ҆wQ)FMtWbLغ84;zK&ZvnsrpV*Eh+p\m 9VN'AògH)EԒՄQRz#SuQw֯zn).FIJtB9aMKM,S4m='V{Gȭ[ ZZ2":1`gU { qZΜx} v[&=VTQl=N;1U%hWoR0#4'(xAcnoOUTZz[O&z0޶ՂYçmUgp˨ջ o& b2^^%,ѵvWo.vZ/X鹵LGNKf)ZjDU*%ҵm<aw oHhWsvaZ{؉Eo)H(@ɌjR@IH1BwK"P0T8kzERci'0x;ƝugyAhwˬD1"E8I=LBޯ%oqqz}i[ԈbH=oz"Z#S]`)L['5G Mq GJ%H<āt+HXG_b&%.35FҎYYb*:SKigٰd4=sM(J襔5: z"B$9^yuJxNSl͗$\jCBbGdTaXK,hn*l"B6xјI&r%|W/W`N=Fc! ,$4T{R q$ nXF ZMYHROa.tY0]kC[)"Á; 2H{f>(OHi1U%Ȕyi H\><'.Ї(Їs##8&lh!:PcR]bA%ko;G 5I, ^luBI *P j4!P,Xi9 kκ&ONv`}Gy ZBet{WmTM=0\3VrMǁXkpayʉД0CbGKı:EvJ1?g=;Ƒ1Z,eJ,jӆ+ń29蟋3XUNQټL JB 1hQ\1 A*XEFӎlg^Y-U %(,-7c8::)Vs"1̳P3Q^)VE ! t~ 2E ,b 6Ôx Rr,R@TYH3 j IOz }؟~hS-6;26H #RFW1h6*Lg}aL&JY-x=dFD {n@h ! 
N1+a 7 ~ZAJ m6 uvj_3yo ~RRӱL8f9+ 1pwe$ZPqJ7_vt&!%xVAYVg\pIY_R.(O?%)dMvlC-Sv8%8bu(%[Ѿ@/晷`!oI2Eȼ"^dދ)4 $eLi2BQ4 R=Riui63z;r5ncS t664P+Yaa\al[wO"nBr!V2-hr,kXd岸Ryoqni8aRv*BXYL^jʈh h#2&"}o(Vz FѾ9A.'\CHh\ hGegR%T8 y J}*$M8%2 ~Ŝ(aHRbX\ړ,?;nM ?H4M&T-OPwhi\LN kovS% 25N25N8'SdjLqR8'SdjLqrLq25Nɴ'SdjLu-Sdjhz@JTჩVeHJ[ZIR\T+!)V3WfL^DfL^B}fʚe&䕙2WfL^+3ye?)3ye&䕙2Wֶ䕙2WfL^+3yܸMMLn?mUi0.aRw%EI%3ދD~ZeˣJO͇ßEa (L F#!s6^*ȼSUc=HUFK)#dC"-{^N]vxD[AZƠ{ ^ < gJhr{Nh1eQAd@RmkaեyR11Ks1&T}~ԷjD.`n .#IIm:+0LaLJUqo'{ѽ_a`O>ߣϽ4zK|U?Gq}U|7<09K:V$[4wz/@~NvQ>> X ?ah| bgv49f87׍4BǓtFtacpuKh֧ܧFGqG S^q|;I3WAM!H:Ca;sUt6իsTJa8zOI-Yw"%6<ҖS>WL֑|#jMxSy.,h\_ v6L#PRf7oKGwdF J#`2Md344ăCܠ%Sh%k߻ex"{<#t^޵ ao%2xM>^oy!c~Rwpui9Z5W۟Bm6gݹj/о&=BUb_ߚ?A1jz@g\ClTeC-ٰՔnzOE6z y0Oe"=/яuު& jm_xUUoy*\`'ENx1_?@`IHkms~q|TG߿A2+휍jczt5I3}- WWT3]2tjσEV|.aB6#FEW܀dv&ZU\}:DKy؉f j㬘,(8sZ*ZЎ%@abNꄇsٗRdFh *{5=x&LS<^i< cyVzqR$zUo݋szt(:Qvh(Xj#jIgU1%8^l\L3^uHX6ǣ4,=؃gŦbkD04۞g۳!qO:V5,Noy Ci8ؐƦTgT}UEZ|ov|\Z5RN b"&?ygLD, Y9ǧX)4TujO~zA}Aa 9"o %8Kp4!)R*DtR95^d pȅH©I@e$gSG'T:i1q6{~]Ϯ5u&U6lG_vGpZ7L.(PLR>Ljot'Û*0UN:@)RLI2eMG/0. @@ ȣ4uB=RTrh@:ERT(D2Ec2"*FD;MM.x\N)K$)P5y/e1q i%t|]'qhI?Ժqy|Z~2k-%JVoJZMu, uϖ MV]N6]:rhSեDhs\}o}1y|w]vEZGNmc%,[^s>HW7W:?f7_,&kχzMj~]xl[b}L}8 :q7K[g?m!f61^ #.JtY"U"h_!Ek:M» dvpO?qrN0$wI[9S8TL3a&YKY$^doC y 7,ܪbU*K!Hh6NKT * NLc.$!lDgS3 E4e'b(@)E/i%87yUHhG$KX8fTQ ) #b1q6#!.NK,%"-j|B2 `"%԰$PPIrkd+##\"Gr;\. Ҏ]+ w?m.~|GA_ir|՗] ȘpU 9$%LOшA>$ҺS4.=(@OV>O2NL J"$z41r#`Ͻv\&ɼLh#I.8bQvZ҇*&hY@3CMP3@p&E8,Yq Sblс|ص KˁjG<˰f5 puf^scS x<)CZ)`<ULEo*Ϡ=P[%{8m-2yg%]ifOS#k sXeA _3ؕ~ξy"FrXtIJ>zb( 2~IJ\t~zbBJwEJH({6pUVC,R\p%9ںwS~ wz2]޼{7A9yfC+ӄ9CJi"ӄF9 2y4@ 5A4$$!Fsv` !2ʆ.%L:H]Ou42&"XXy+`$SХRlvap\;^\νKϳ|y}skmQ>9}˟gUs|Nb:Oxsy<(0 v y8f |^5PޭpdBǑs+\VHYm#YmـKύUyHkUrN*+J5Wk 䒔m*BQ!Q2d6آp1׍nqsR*Rwt}g=ғA"؃HЙX:caJ "w.r(8ʉaxL@'+7ȭ2\!@I 1jM)10:t+١rkI |撒HR )(2|}ցJT't_~%&S-jZE=S-S!SsTӚkPYoPN/ [V.뗆ٸ8vkVꫂc‹SzyEK&~G33BQ6^m@m1,@7PJG.A[c,J3~7`_o_GVG7FOA8H\H)&rk&"NG92aC{Y}VHI9E !1;j8R!]r0rvhd:tY5?e/s}7v<|r@(>.e#6 ʃN H9Ы҉f8RT$)Wn6Ar.ӓ9Ycdw*Pj"$cBk(>h^smnI\'PdH5f02K4 D'h#'l0rvZ YͨO˛)Ѵja *MJuLQ|RAGhd,2fgyպ_Exl'x?-?Zc%*"Fs܂mels TR3ŃCr+?c3[A4 (5+F3;iD钮7'XF{>p󛼓 ?Ddn4f&AmSGXs|ROceQP*с Ob0n9MIqZB.#1\&-A,T7B_ Mh#!CC(P4g7q!xB3Sdka 5<<佦jgM()D)9m$TeA?^?%a>FGU4 &2Mā2AI ty p!)aŹ´I=aD P:eڋUE\Gbd]u="5xcQsqiٲsiix9.sߞ?rk o.Uϸi} KnޖovX\wZ7ҏxMmNX9qrЛó]ë]gd1-#8 v$|˲ iSTKp=d~5[\dsYwK{5?f[M|\}\6y4ZΦl ٖZ8A,g\YU4p:3"hI1Mq8)6Ti,~Iߗ2I#/5>lqvzRݯ_۾9{[p)mR1)+ѵCu:7cy+zV\ci.4]f'ū&! WgʟqlNU6OepaOnWuvW/;Nr>~{zwg!sdm|EnG}/m7N#|}& K|L1x'H3s7[HgUfUWRwiEjg<s^uk&Lo 1k/uJ? 5P_Wn 7)c9$nu_I@?$ҁ2) ~WwZKc"ؘ[:ʘW(#(9aDQz}:+`ra(ߡpukDи(w)g| Ѓ$y}qtׁPP&m] D҆Dĭ;4g--j=ϢqhRREorcdBD:0ɜgQy"(FI@ :ls OQ#e ƃjǹ2 V2  0t5ۡqZZCt{ˍ%GXr+0΢U}B3,.t΅)()9]koG+Q2`,vnpf8~ʊ(R&)?3$ERdS"I`Y {jtUa&&w0HɇY@Z؛ Sw,3N6"Γhq^jT) RuT2r+mLHL稒3%s~K/vpڴI;Λa&l̝7qE>6ۜ$K=/:TD3U:u&ά5g盷KD N)s9\5F H@B$ 5OR/&j8Վ`#U;x{I׷=\$=f>,>Mt3Vÿi3xIpgRSDU*TN8S%&eڇN{FmziV*.BX9LIE_ЄI% V[ <XC#w&@8S Rp|mq W9q}zjL+ק}yBSYveۇ]W }36oW ehn}]1=*d/ ^u 1s' d'yS<˧bWW ˜JFU6NGۯ-nz"܄VՔ?0-8ԜcK_|~0JxT9RV(P@$SC/uc@:g1MY Qp`.tf:Z1%/@ac;w1tl[[! 
Fۅ^ꡅ}p'ގgxI*7N(8+<I,szt(:Qvh(Xj#j%.%ўitR /I<ip0Uk';?c+0E 4Y>T36CBrgk\ /`XoJ]Oj5_?_usW\vm®ȵ||e?a\vW=fwIkas%%;)GqL] ^HęAW`lo' ?񃼌#]tֶ=4C'.b6xNu˷ivampg3)_USiD7}tW;^~g|;yT6NB{ge/:/]e>08C)o- +¥ 6(EWa%p$Rq]kV2Aw3tV@{`*ιHo|{B\kD$W4DB-ME Θ zf)a_j.H׸޼Go]׽Ze;CN<[fzǻ3t~Oە 7_$o.SzM٢jN_zVslWY4?F`c49b'uecsh{$"*)dU"h_ heu(0I6R踄q 8hqv'bftp-\˩h *&PTh0`ѥND+L1I捼KϖUY Az@p\R8P)WD]D\p8g3#eX.FO%΀d|Is][y#8$HZNѾD-&/yjni.;Cbzr4ŷ m}*o>i+gA[O <P1@C.,jR t92wU s,PK+&fmppdUخ+?VT->kef,|]./tŔ!3S\6ULEo*\0\KP[%{8m-2y's'oSclBjcq/v<߶ +?{|;?F!!NCAϨ#syJJS+D3;}*mw~oc[D`x1Q%,VltX^@ոaP#݈,5|#HP=GTwG~_g;8kZjn{{/=>r }^q@!\d3u8;kQE?=5 Y}s|a鷆i57gS=@8h/u_>` R7aBx#8i8Oqf>ͷ4JM\뮏73m >.?i O{gʩyA%Jq\YReVN-ⴥh7ߞxM8 p I1>8p lpͬL1$A#|W$?E}p<uiG~x: Oy?&pOűEOʙ"|B龇:C #vB+y2u.Ϳl^ҡoQv1{dyyLRs O8t;7d(\/tqM{ն.Zϟ{>{V|g$<'zJTJ 1`4Rzk7*Ěk> "yvQˠ] M1w *)"qRDm<8"\EFg}|LY41TK{X ːf<I{$BM>*xXh3VErU."F}V%mL T[,:t fIGcBenjS+;á4M*z9 HR¿A2YT8N4Y/˅fB)jcL$j"02B.< +g˘̚!*J9ϒ\ &dɜͅW?bbZRcMM 9W b[\tfMnSFS@0\-XV)(ċ] E ?%nbť>,̱lOHGxɔ }{== =Z^rЬ]>X40O_z? WyL9=mB2r6B U~,T#7k rDI)v6gnty0#đ{Zh*D-l$F:z>Ϧ7N׌9^oU \8=j0Wvy4'W=?6e?uTxtkDc{›ea@p}%̙8[nviqO?\̛7v|չ8YՒP+[qKgaѰeey8'܋yG]9<=6+[ed}\gFju{bϿ&C5IUCz@I/w*JԉʆvST|_Oc/ q=0o>^%#N߼MRMc{-SwM#hy=%6_psl"^ gkʎWPlgmpSv[E#տnK%(VZ;v7 fW8`h*!*2ٻ~HZhY>}:G,-ι!%e2:%X4A` %$|X#=bf .͎{:甬IGs|WIS/5*ٔWx)yGB:*)Ā^Wy="{:8K.b:c.|v:Ƚާ6WzYՇ  @؎&NOƳ(.qŜR "腩O;yR GZXZG OL3ub"HgNE4rV*)H0:h h"ȝ D$4B=BϚH֜ >uzNzSybm}l?GgV7 oOm˜=[Bʛzg֟Ht_'u}A?)ZFL:7 4Q\wT:(*J\83΁ LSCX9M@$4 )uCAD NJ.> Sv172|c79b;^O2[&;rJqU8@3R8'ѫf/L$7C䴳FFC4R9XV#H=h:P^r <ﵳid2O{ ?ep+>Eh+z- B$Ȣwh- {@K"QpT$ʄ(eXk&}Z#л,hPE ~nWMo- f 5&A(pqG*,ϟMŃLfSb9 ="< "d]Ը^?Kx10r.jfJjn)6H̑(0p˜bIt佌j?qI Cv:Oq^D VIG+7tWԷK<=爳_L}#g2_134OX}]]#Q.X;6y=B֞uX$jfrGR*|CeX&l7x2kDϚ*{>kZ,t 6Ug ɹ?Fy{{ǻg=b]V;&u VZ j A3OtZԂ 3u~AjAkB9Նp [$h)iЄHIhɔ$n\Xє $ TR$2A3⩣FksN?"lԧӏTvX]6E)I˞gS%BQ:!3ЎR*Zt J)*Ib"NjLXvHh3nbh18o3 RHF%S69k5"۽L6/%E͟a9_6\W my&^ni/gb7-*44Q 36uf#f7TW=Λ:pK7rh,]=ڼQ S,Ir-]0hy'ec.W{|]D@ݢ楒!OoqwsQ[dl|M~uCwuG.=lWxEm:ޱ6NG-:4TK[o.8g}sZю =O5oݴ^>Zk4a[j4Y& w.;}sف˾,OlN0$wI[9S8TL0`ѥNDe/} z/k;ʓp+8 LTB^+ l ThEJA' - i8(IEj #F:*q$HgA-Gp8HC*N)}ZL5%9>)ӓX|ٰܲ _;nQT|NE΂y" M#Q4n',H/bH)dheMU !9GD:$NBOkB&AV R6!HFblFrJ1,,b.'?H};Ȗ( г˙f;-Pha4?qF"&Q&P2u+'"ȔÊ@ Le!l!ٲ4w( ˰ɥ *oGI L#xK!g3b8,ŴcW6Q[4(@ A#resLkg^r}f.EqmJ AMK_`x9h}j25\Qˁ8P9}-vZP7me!m!-5Қ&\<ʓyT hFkQ+P/Ҧ^;G^+E©ᥟptorO?>{;^䋪gpYnՔGyYY#ntg;_{5 _GT~YxoNɚ {=iLkڢ Siw=M}zE؊;TQf 5[Ab4(^#GT*>s!~X\ /Ff-Q1U}gS0+8`B\ŗBO(: Iu @D颶%ڌӮU/D0UdR3 6$Ť2*8D ؜kffe$INEi7cYe[:3xK]坞e~ȳ ˙V游iYz:Fs1Gx[p/Ar2:LP+NBikT7"oZсÃ^R@[H#|ȗs -goP!H2  cv$ ybяD*g!.ۀSZG/yYBsޔZT\@#Atz`vV tH3YbSOY} P,m9]Rt+y.Kݽ㥏gT X9/YۣwW_`?MTHfr 0w +o=ߙ~[2$ _ Ńe?POP$x q+ˤs6YP{L.&HYE| n͚]M"Hk8ODjA hpXg3S)Qw%R9nq O '+7{%)Ĩ5t:,hac`ֺFq)<{jAPT>0 0"~䆰*nj%Y3(Kr.8 IGIiG1b  Ɂ+D:υF IQ#euj6V#O(Cd(.䜦vrֺW6%NoSI )! 2\u=LĄ9B: ^{f n`ʞ;A;pK@##"k%G?c\ -"1@bfJOwɝsrP$z.9N!fFJ縉B "NGy9рDMQٮa:g2isarRGΣqQ;CaL!EIV* [" 0(l & Uݹ[d9+jW5 ٔ;#q7gm?#P HY@Bl`N83x>iswx.isoܹZviܜ2Oqi(㨄@53ȒiKĕ:V?ā"=$ͤx\sq' IFjA4oSj*1%c*q/ T Z5?(IB$`345Zks 3%Bu~sY R. ͵!9ry~Om˼epGr1ɳK3RJԶd!!uS E|ǽdtuPe<;i#ř(G ]54L&}RG:nzA:aZ:,(=!V& X ܚS>J( $! 1dv7LJ sȴ Tx㩎F&kphiJ#A2l;ke.!@ϐxbnCBvfԸx|VTK4BכNtb:%*dq!e%( e֕@d*mHLz`hkk{jw F}&RIIGߒ !T`X)dγqVN$b]^rͥidB r`jprUFgDk*E%"WvP*澼V|L2xddx4SU?MјJYZ,?F%];ѹ/:"8lkh5C>fa9)2iΡ\ "Ydh) 8JB]˩jPg$QhCQUXTH B9As%U& z#HD'a3qi L %hri +,1qɣckUa8)SU:g7z z4z +6i1ſ9jd5)c̯  Ts-S/Z^?NCTW1|O5!CXQL9?Vs_'(cߔ{?$RUBcZ\l$F:F1Pc>]ԏgrQZy]br2r 9zs.8GpŤ*(ۏ@HSKB5buKgMͰf446H,oC +3g1k5ui*#w:dSMcF:^8ܰ@pQ0Tdqu@i1GDx_7ow~ݻ=zׯϷ?w~x2}//޾5w8Y17u> ><˟iVެia9ks _]ڽ> Ube\W O? 
qVwv-Zy +md:t7F]_{%f{B%B47 s9  M^Fi[FW̾qɲ" J\rBdtJh'rV8 I$bd$HOlc5?G 57#s\rx)yGB:*)Ā^Wy=Vtv*Fk4$dgG=y R؎@pj18$9;* ZW#~ 5%*Y1_E5_Uͼ2N^o r0ށ1PkT鍊%4T2I 5O҇܌f8T3:M F:F$IBL5O ؉|xNNI^8]h5UΫ _擳^oee8Dv6,ъ߽,^\m0T20=nÎׯ :~xU]`q|&~x&-qZ[6;lek1~)sYI.FA:LZT~RY.@[map*8"hai)<1Δ <0h> EKT%R( P@޵q$OGKq@%l6MOM2$%Y6o̐$%)pL45=_WUW}e@2E;oG9js%p9@:l1MY Qp`80A0KN8iq$H x|y-JI(tDs0tl6Y+ȕsQ3PwPRsOOI5@bE* I!1438892\w*G 9K p.GΊx7S81_;nj_-ޭXPË3rB++FP!|:y) J˴1 y%\S 6JTvSxFig%t!.&Gq0: I&HSU~8E ?y OV׭yr+־;W= IDADK&gCZqgeWo[l׈FV;lؘdܦwqU\_uE$[mnڄ<uTuN=߯#FZFwaoCO>lQP<ۦ{y0T=^oL>佌|~I/u*+HЧŨPAB|}|Zg(%\p4<~?+tGy7ZxUd=$o+&vzh4%GJ{^A ^SvLpڿWT-mTr;ijݷrZsvޱpϛmv 62 6ܱHUr6OEΰMxm{̛˴&zZwZCsw@u;e]5K΂,D Hӎ|L7d wGpJ&!U'&J-%(;"2/qҀ)b-E寋:*㌟WЯ/[ox^jHi-L8c#UɘP4FQdj@h%T!F#^>T{EW_lnlIWV)<2½Ǫe(e!8i[ʆ&j9PVOq6.vC0V}B弁J*4Q`W]v lҮVmN0ZNr%k8C&&PThLIVRtV'"**FD;MM.x뜡P F%S69k5!0YX-&B>uj{$|Za$Dp&7" 傓3\H,W)[@ eXM%d|Is,heb(@)E/PyƓLZUMP-|b'r|/VjyrVp6k-'8'gIWvzGEp\ G ҋR .\l5UaV)Ʌv!qz^ RVl :%*I% A42g32Uaa-X;,y$"bv5nf}jgoTv0]WH$QB&qEbQDd7{9Wb\`*(k ɖ-0dcFCQXM.$LP m;Mb`"|L+]hL}Abc[6Qtif-ٌu&uy>:͒mqEbkg|BeAE JaI &ɭ&p.#Xeh;Jp;uŽRc[Ӵ-J]gf=ۥ Ӄ3ɟryvTh#a dcSF nK/3ޮռ<I|: },AlɬSh;L)@&̽/Wɧ\&&H]D+Ǚry]3h1h0懞'+L PB7Ns#Q0@^;.d^&Ariq&FRa.;)&Olۓr8_S ;n5);$2yx 13JURh8[N%pŻ[dװv`nWDj/7y>o ޯhbo8距}Bف;oBߞ qќ]X7z 1$Ϊ0_IP&s*FVYgIœ#7&[xJiC:SH\kJ=ԒHYSTwIthe0u=;*f YA~k%zv{*\aoYCSMw85 woezͻ}x6]WdٿqM?߀`R8gj_K5{۠l-m7Tmi%)i!jVZ r 鬀Jed0Z9hWZPN֩T;|;JŜl%KyW5LNŵ7y%E@8 )"f>+5^d el\Xύ$!JʘT&H@xF2_ScZ%{Ǖ>|@p!8Mѩ/bㄅVK:΃yCYT PgIo.zk U/7M޽h:O8Isw./#\gfVI|Kl_i.;vqzj,3\dp,JE_zϼ!B֨^5zݫ׭j^ȻE[޻oMo*|6  /Q<0/Ʒx/ͥBMG_Ė_n{'iGלO7?{y{az3_*Hzw5:\ .'VW_ b;&>ְ<6DW9cbJ)k:kdCkX5Z2 HD B;J>jNQ$1 Eh,[cnYϥufo ]û.}-ZѦ#gYjlݶX.abIbfn]5 A}XeΕyƱ]vqRZhM˦^go <7t܊6]-䥧vԜ?oxCw'^s'Cmur%i.`gqxebg\U_ q2+aseEHZWDT.j;"*N.WI I1*h6W8Yb;IkFdUP D9 }%IԬZ{ZL\P:^񄣄漩`~z&:ujGtϋ#.,qGԢ4j[ H1w *)`RDm<8"\E!#`\bc/c?٢"5r"#Z&<)MA7H(~vw1U"Jb GbSTEjnFLdm2,D F5 zU F1H󶓓:rB c n p`qM{[w.8Oن6U8 RYAD>ldɛr~(ZASa$Ȝs 鄣%)yy~inm[hpg}g=cLqNJMd x0Vh RۜLSKtkT5-I+\ORp@t1*Kz^gO=+/@JALP림!Q-zAI^IcjFj tКM#ɡjNT_/ v )% Bt$Or$ ),j!Ő3`ժjٰ:?)F7f}I75:;*65H3JˈЁ3ioi6vsVCUdNLǓ'oZ;JgԚWO3}iaeQi%gy)h4D BKKqs'՚ZtAlI,n6iS69VDʷC=*T)N -K0PBv,ּK\cIIOt}U'SV1qū$# @RA)ZNHǨsU#*!U^ AOٵ~}+ԤbGCzt%f.=T'j]ţ#߾vDGPowZE4|sop{}UG]pN,3H iU$OY;R}OM%%F(=!V&$X ܚSއJ($! 1dz#&YfEV9*TG#u`sq>$y+`$ST68{Y&1@s]߬Pe<^U~<rɢா+bퟳ%0JUUEV@WI+]u[nq,}.{T[u q/qK/ _~d<[PpQPVPգ8Dy#Q"<ٜ}Dn= @I 7Ɵ֢6.oќEﻸR/z7(gq}0.7f?eթApӚCpsuJE;\\;-2W+b2ZM7We%ޙ1W3Zd֘+Mk/x=\٠3Wgh8#x sp9+Dv(Q:suJ &7[ <Ïǣ,j2SxrQo6R "Wn,ʕoU hy買ȸh1PkTፊ@ rڑ$^Tܻ_)ūE8*@ `}-E轞Q%|`&*a~Kx|F>뽏wZvߣ M3,!cI PQZJ;Rk!G$*yBsjiʣ|}7z=DV.2BMɰzP>)(MZron%,Gw/Ǧ]X]gܬr+vWZ]TGlf{i^5n)n)iUpl Uho:U(R53E Sݞ=e5馛v|̕idHspj BZ+ 9+m@ov="/  +wnv܊2Sy2;[^4p(w[Xe ZU+s@ DғV 1m 0tސQBy6Ee2`Zc2\ݚ#7W%#:CseѦT۳W|k!h 7Wqә/\=_ӿOn,_!:2eSBò)wة2e[d6[c2\m1Wyi :s*)BiB/^'FspWm1WVƳ+Di+6+\mts hh54\!JFtg\<-2WjpukbWVƳ\Tu͕֘ \e4\et S 񶘫tUF)7irfP#LK"/XNhѮ%;35SS-MiqJn7M$BFw nokt9}E]3_q(OKAVڥ9/h?ɷ3/7wx0T{#:#~ި2Oϓy#ed۞i.uݻoWA1WÌ*+&Q1PTF5`=)v]6$Yvn__s9,bcopA N,2Rg-ޓROmZ ;c$:'-uSx>ZhVDAZ:Y Emt!ָ>ߐdfS˓k0@1X!u;Z_zНa_SE>Θ  'cԺ#G;%]3܎n}6v5F>ZW3V < B1 ש7k# #,>X.wܥ7ea&^]xեs@kbVfWY]J ^j6L]C/GkkJB8qի}A\ \> b.L< GdJa9ѬWm|M|;.L%@ vX+fGGg}$=$ѝܥ Ϡ>͆QfW\p,QpjuV 0c9@(_.^T=WR)?(aq<7 F՚*V|zkgp ~^Qoa\wl|ȥs0G~17>ƷWhWRxOζp2ps=VxkQcp`/VG'ISnpw?ieRezt)(Չ(Tҭ.J58**:lI&aq3X(@#Q |׷uЩd7('Ѹ>j#L+_wNDWO*qQR 1`4RzNnUTX֪Z!:Pd3S,CAQ1xΒj=YkYpKV(s֎ Lz\9q/_ EnŴժXKP. 
J׷Zu'r9+] W$xܸoR;׻>Kt|WU+~u#\ ٥kz{F٥e;N.P Xsm*@"$ p[Y$Ȃm/a(ʛ9a쎉rv`)!A De:camk#J#_l٢D lfv3`2zPq_nLvqQ?ewIʉ]*Q?RSV*EbCبEFu-vUVF|`W)2wgdJ@YBFuonT8}$`wȬ^{FWJƉ[yvalMbdwV8zq@[w4 `J1Y/Jb+ hL՘NiHYA+ĤZD -: yZ$l̬\{ҋ)%NoS(gBZ)G]QRY*R#1*}$3Slrf(_X#*r5c&ں,Es$"eᤳ%%Pt˺Q@EM3h]1@lY XŐ$١Hn`eme ԗ;,W^=9R l"(נa(XE<~|G윣o 8Jhb DIoLʓ FveE4X%7z4lY0:/&EJل]qH.z!Q,K5F 2B΋ȹVKX Sށ.{Œ7Fc*;Rk_uʷwŧh !j޹gdH\%2k؁d__ -SAX,S@$JX3,i$54/<`# P7ǃ4IOw BWW0j-2ds6H(:zυk . Ԥыb3æ2'$t$i#uFQ-!x5)1*d!YA*& *3Apw+hpā0mgl<г]aަ]O|%-b%*"uH4IA2hB:3][wGX&MO? w6C恸rϪbMx?'_EH,r IiD?Y$ &OlAyǧ|քc:kxzCɖAΩM Q6hNɓ+~dђc15H 2P(X6"uEjV# uٰM+SZ P:$eQ5/\a1:IFs.oi7rnsץ.%#lҍ of<3O7ꃻ!;xt]߬DBl~q䨏X+15^5^:ۄbT*IHybw;X ,;Xw"oJBS #i*J%6Bx1jL.b&g_Ϸ/A  :)9| ^sL™x-eJP7r8Wf!w ! ]jϵEKovtizWBlnظ!]脮W"Se3'«Zpխ2*;cO{ńhBlP҄\DSTH(aXo=(C;N|{wP|KR|ιxA/s.yFF$&̝dooL&B1R%05?ڽ$uF R <"zX@`30mI)[,`}fFꗑW-*dc$WY KĨU&xJ+:R\9E9Y}#cC:dI?;{oQ T {9 =:3*l,AaHxM:L.hkrGq:\}XgM|f=?NC< }hS4B-1#|FJgX9K[*Wi ~ݴ3:;Qmѕ6-Kūk?._w^7^Ϗ^^fAܚxRo}E^r&OΗGac᥆/OGi|zO߽|?A×yy 8̢Eܟ_E ϨU5׷Z9ܢj%l÷z?e4*5fi[R?^Lw"O*J] -ZsCu $L2#L6،U-UNf*V8.@r0 /At6@}ѭLwHkH(`b ŻR"bFOEpf E%e"NΆ0yC](*XXI lYKIQYc"[VÚN/k:;C9PבWa4a;hl [gR8q+z훷yGz!t:@C$|uMZƄ2ZrC}6u'Anp{\uWZFhp0gz-W\{nn?nŒ Z1FJ9hFN2b@*l!O 0ZyTɝ`sL PvX\]ߎhtqxhVGU h&9*L->]g_'&LSiXqp6H0_1oJ vU5hu+<ꇘp],~C_;Ȭ+{/)} !.-ݶZP>ޡIHG̩⋜ ׮@'IxX(kl#1 J@g:7ʣ@ o4ܽLv(*]ٝh(=s5pA:I+E`IGԳo3lr/{uJ6 U6cТ.HEM6S@zS2$B(.썜-g1l]pxOD|춱 nhS]yIM8-,ucoB$/E.wXZ7[hLHoAɶkaAb<>@ )8%X$CVJ T,^9sFK|lgt}mo57/ty]1XOdYz1kg. f5/D6/4=u}%vFkA}YZʫ0{}^+|'#ظ')3_X##/?|w/jK}?*/͙/ZʌPLYe)BhUWYK&۳?/*BU㯈bFCEre^S?N^w˟WiwVfuʆhťvx/3oUjVwG _?3??+BT/6 &NF<GgkRW`%gw5;¦'ͥ@*3׃+f!ה!B^ߨct?5vm_RsF9IdA$)EB zg<8dtPDP2Ғ>|k.|+8&"wց<^AU~)9g dZ0 0>bkB&[41A$B˻w PW,R@HNBd /<  kRޣ|V#F|FLpRB9[9`g:M=y?c$KįKfY h}SwU3;rk:ռtʕd hD XuP2NZ# 3=AtA|m)sTVB4JhBa(Eab K SV$O/{-~A'_<4MI aFȞr(Ţ1+sT Lq& \dB͝aYB*E}[vx;\g^CW^|1N{x٤AEgaT|eB2B̢ $mn: 7ʭC#BD $B'EC#OC8Թ-D]^>яY?{ƭ-N! 8@69i6 >mՊJr'~+k--ś rgǙ{3s<$JO*Nݒj*yŲW ϲEw>έ*0w=TC.i/MDumEC\[YL)H9#YE_*珄 yCղozSrzΗ ۯSWvy:wS_isRZkN;?^zV1]H+7f;vgcmga }J8 SE/tó~_ e~+&ƗaQIY/? &7'&?'n_AG ճ 5pCwxKPN  QN6.*^&۪_|,6/麅^!D]ۛ~.(t~d";ŋHS4< r&( Bs@I^] Z)Yk,۷_u$M ^5F 5*밤$j0MO¨}_ز7)z3%6e|檷U?@U6{xYߠu׵;4Nm>H=(p B@_%`[Wq֝&C s"D'[+Ahɥ|wۯ_~"2.14&fOw.(fK-ˬn$I_(~C,\'_W?Wɯ`,\4UlڡXeSX2 TܳH0mIU֫MʠlV5e\U$gWccsk}G͙77tM dq\v *w+ ޺,*$FT.t94qwe'8ie[AmTt8A[eLb3":N@޴˾i}~g8s 쪱4ydqEF,e2SA1f) ,t+R͞ -,0vGE380H$bXXHjc<J#0Kh>t+ȍɱ 76l.l}EIQqXL#b<dќ w+0Äv15ah-J |dRU`̙RM#ccGlư1 UXZ,.x*YKRʋc7鍗7~Y 7c;|L L0O%TX).I=q a4BZ͐= D! 
6N0+D{CJGt`ZD"rR5؍k.桠vcڱ+jQ[Z4a;fAD`3BJͭqNF(Fc i#0☁ a  Xr0 0GׁSi0YS?P1jZDlq \,"a5Ra VIc,H\r x.*)iႸ("jo8XG!1Ą3[`l% Da0IXÈؘ8`Y1pq<\:XgcZ+.qJ` h- $rΥf(B{JE`Fjp#P|jqqx0xؔv슇a<$ XW%ُ({ۙ8PӃe3i Q2wZ!Q1'9&O6q?iw50qXhcPl<e|Ѓ>qSD\֕/jUr'ʋ3 ooMg8>:GٝlQnBVR& 1A}0A*1j K'(߄/F#Wנ'np'ʠm]GratJqamg19<10c~ n0 27nO78gE^o~2߯"4E"7~+w `]F~\ﯯ_KZF~tD}&RV!q!]4ހBY@OG,3*#X pTF \h0 +#JsIଡ଼oaWM[^ሼ4[Yhފ,op̪p 2C$o|לn 3$5qMpX(ڜ-rm2ZN{97x_+;/"RvY_6`R9G疤x189bzҚT{) ѾNASh4 aJGzJD[B1`yH{)lQ;H8M(rneZGEdQ0A[R.k%#RHDcѸYo!p+Ar7T eL*KG4*C6ד*sq=+eL'(b8PEjO`jM ;WhCAX"Gm&o`TmksCV  0hAwO`TGcc1ՒBR2[S jiо#$GW X X:\%)kk!-\=2@`ΎǺJ⊣:\"\i3rԳoaȽn{ӕ*ERe vU`!ᴃ,., &eQكΩ9NYJ@O2L[b &^Pll>8xMC񪟢!L)?31A?KO.ël|w W̳l|* zTUN5q+Ev\crsҙxВhғ$C]@{vJcJOJɦLJz,p*zp䴅'WzTII`͎@\ƺJRzpmہ+aSDDj;c'o'|5} VR* p[ڵ1ј#$+ t,p}%'W 8ea;%;J*~,pRDpR>"5GWI\ :\%)9k)Տs]ϴFk7)_Tb{}zŦ+qo (꠳uEyLBM KهW3"z?;P}x/72|<dRP-F^mFM5N~8 ?ldEJs^FxkH8gtĕtFʷPiO-cR<6(0.HfRn۳Ͳd 5d~2/ğGnos\әQnXW;ףrRU X( ǞX,f";P epI܀69p=D"p% ^iߜF( 랗ySz0ʻ}>MeFhCH(fAE+An&ՆS⥢Ya96S&aϵԺk+o&XUk+14!6Rd<N4z(xoc%¸]߁jYct-n;gL+bIie&P <AghX^k1hFDJRiiUڶζUD%qO,ѥ9w*dc$'sjA2f& 1 EAݍ:0#!6'[֑aCA1HH/E"h 4#۷z!%X» %pDMăv b6BW iVe<{C*oI(+r*)BRT{N)rly(x%^xZ Ӹ8@11ИUds]h0F>qxĨt # A> 4BI%&_o^dCUJ-d %Sc|;D> 鼹YUuPLΛkS 9ՑJj &MIl13 |1Ú4Mݜ|OZ#vݺ{cGZG7H}m fm\9DsUToHQͺ}!XKݷn(m|i"%^Ok;k^Vj!G]чΨIBNF` :`֞JO) ;*D{%ZGviEO ӝ6*˂Bф i>OL9'X rS1S3PQCm>+h-a&h;o~ɸl5 "ˮFPyU7J)|Hbucm̭Ikq+YgeV\ZY&n?K+5;xXUAWM{'XPYlTj#m*") ٻޠ\U6S6);[NԬjځr'픫@L`6(:Ӵ xiNs'`ΫmO J+YҘ;搛!ՠPw^KC 2P(SP"(w&.XY>g nQ1uKs5tPkJ uF.Cx=T^Pf ɦI{X v-VBJAQ"Ł.0͑#`Pn֞u*%dNB4KTK]d-Uu& Eyhp秂 f1J%)t ueFjuD$J(k! e 1P =(aL! S=D &23>/bA\M.!9Yh>3(ԃ,0F!NH21`d%nvk{U0Y*f=f~ ;m&B6SL`-$ >:gAuP<>o:@G6^&W$];M1[Zm\U&1:#'`)rE ]; ) >@(E&rZsȼb|b\ti?X0Ft/1 JWLd:[nmG⭐2p /=U]UBNu~d}:MOU;Rݙ@vTVW QH$O7;By'kWB,U)W`>^b= eDA!D۠mAJ<.J0.#TDH HnLsP0FϚ%@rFE Y)֎ a<Qy@oBR @XDjFH+S/NZ5^Q0 td+|)5vE1YIbeUUPZ|MC@;y2 5\oF?ʰTV( Aw8+NM )ضYW j-U4yt֞Ewv4YT j@eV3 o=JPkM[SW``!-C퐄M٤$@ݢ>~Vsp ?qjhʾ0'R"=]' Q@,%]%G-gVj3{>);mBޡ\uˠ˵ c}Ђs{u нtNa]SkZMD^UhOV#0avQD(nld+NvbF+FOW̩ ]!]Dn L4HyD^4BW OpR\y Tap ?? 
-"a(t>u!c'o+(te-JWrikWBW_rNDWK]֮B:Bʝjb0 ]1ZNW2 ]#]j8WaAFKWW2]#]yG+) CW^.\*Q˼ +eό?#sju.uv?qowO=or4BDS;Iss!vw/"Ygkx pAËxp\3c=FKnJ(+h,4]10:9MnZQ:ytLP4Җt/~&apK{ʤ0t8TYA~C.^Ըо!=at(tC3 0tp_:NWri]}263]10]1G+/ua(:BN}KWld/0> ]!]AV{+ CW hb/2Qy 1xYǺyfsA`tDl%[뻛?lЇxn }釓G2FNVzuy?"_q=Sj^拿n]䱣}ʯ/HĽۑpvnko7.ݬ^>@o;[mz!'_'N&k@oiLrz6N62Sn)w2&:~bK3巃3;.՛oL?Viv_Vw̱3N߼z{fwzf5&GNKo';B{?cF(R hK5Fi11J5ұq--eu ˽Jg6m*̺vmIeB}9Iz3n`gypih6ĥFHtHڞĀn֛?g;W~]]%Jd@tŀ82QZYUzЧS[]8՟%'f;Fn, v)`w.bL^۞VK$ e$UlUvJ"GZZNec$-pe\ZT)IEAB++WV UW+ \`պ\Z+ !NWPUp7l'Rpj;PNWB+MAq|JWB+T+dq*pu p~z;ږ+TkMq*)3NW0KrA cbJO <WW+LYzTQy r99J Je[oy1gkǃKhlUe$ تƙqC1kJPKF P)ǓC(P+iQ%Oѓ3]Rtˋ IGQR O+k)%`]δ5LZA{*4J^OsrK[WcOnVWU>W=R3]PqWд{4Zw\JF;$`}ܓ[=UkBWW z-\ B SŊ5\ZFlq*{5v5p% $\`ʱP` Jw\FՀ"F)^@& bƮP-uu4^JsI9ӊqX4)kL`[}0jy`T٫m$L譤=;Ojw{Np}>OU4TJQR UAs^(ROTz?*?-5p(W(תRpj5}d׃+gs˜j%XI}d\kqqJ&8j,^puhX߫\\`eղ\Zc+Ti퀫tTp_|C\\!JUBV :A\qN)ɺyʾ *U P=PNWsXAB9([Vkeq*NWkBEAbpr/WV UW+%puYA3vri1CVDrK`ے@03OrR'jyPeVܫxrFh)K IJbpru1QVw\!$$qe%i%MYIP *] P89\}5b{=;'RpMV_cl7G䲝Zy$gJճ(WlաE_oa  >v Z.K= v*S`Zp--W WQR P-}PWWL3QP SuIPw\JA\ RBp_qr/ƺB dpOWRYۃ[քrpr-WV~U:E\)$-ifk[ @"\Z+T8m yRKK )lc`@e]T]]vU*o %#)5!26x%g)W)go\PDhQ>C邌(H9aP(j黱*0QwƎƊ p\\fJ}i UZ3qe {c+h9q(Rpjo]JE\}5{=?'FP{|\G^NvlR-p\^Č&ؙR*ų7a7pe~͛;778P܍TLf>JKT2UvTEZyX|gQaM'Q^ssda!y:ܜۿ~?\#3zeFW77d돋aQp/nGI#ԚR$J>f%,'J)he=%Yyb_+jӟޜM ߼9,McWZBsv;MIŃ[^C^ '\~wX3gH>7J&m ܸXz?/|;5~ԕo]goߏ!A7-m ny ;]?`h=?9ޘ!_YãE\J^fGݣo~//}-;xvz~uv;{esŜ`DbĮS/k9X(~JǮQ^uM/em,.Yj=]^\P O/%z70hL.Mftٯ楳+p-l9^9>4q"~dRПPE;-ul,Woje*&@81%$ )A$Lx0XPيrR hagLˁj 1 242ۈ-K09S1w=՞Jb,;A]`[-#y^&N' J#~@Y?>a>Y~OVFn7*P;V^Zޜ|m|>&w5^a8ya ky`T3S3= U9Bx: o|h4ޥ5]p}վ&6+jGъ}loH g߱Avn9cpnЗ9xTAEZ%lCw*a!Q7@u-E-Q9!)&Gg뼧[:5TЅyI?Kf=pJk)bxF٧4>sa~!4~gg? / |΢_?'ofCӛ]!j~CԿFnB,G4kV}`j}֑,->&8K>ܙ>t`R$Vx184V'/%G6T/#r9H Ŝtgr 6GGβ&Bz# !9Q*eJ{ uAl$8.%Ku),Y[UYd% 5Ƅ̼i ]u&8 t}S0Vq˯=ƕ?>@pJG{p7,NUc zai~i| &LLf/o&w`^z'ЁTM 6eN9+2/`^{)R¼F1A ( ^deSJɘTkMɌ"4NKKI 1PRrR!ehu͙ߢee}гϳ0Ͷ&F׳WN`00e~qM2("7F5Xzĉ\a׀ΞpRb+JDl|;blf&aU)Ma]y~2 /vJm1@U[Tڽ⸶{ɅAqs|<,sޱny䍤ؙ=JP8[E.iZr(i\.!XP`eYJ-ˊ;l\%ᴥY1_aWPyxa/ûw яG~4JNhJy)'cN m'xB5V3b"dzB%]$D'6DV>Wy%02iNm|mߏ2Zub0A\̚)mu n aT74+J6$N;~HfApƘ TyJ·[];jX&8DaData"83˿è-CZ1U SBHAgm.Ă@y%;,KVU 0w9a\@콲q& Y]\4 R.p?]Q#©*) 'sq˺nM lO:}{w0ziTX.Ύa!Ș$sI: Uy"Q gS4q6g/Ѳ/)ٸZy$E4~eʁ n3[EUΧ 8$ۂ!(1֠j 8^W/,W-?-d$c܀8G` e_ {(B2=DU'֊8R'rn }A0wV  /l< 1+M@@wH*vm4SXu8ÑWqD @_ON gTdJ:d[PR} &ms@/Zs!B%BDاT Mńt^yCDŽ N˼roA+nDނk{yQ2G/2aASJo,%!QW&Ki$E'Lr PߞOrbMbB`Nf zcN˵HgorT5[2<:GN:he(KGgv^L:XΎr?1[OwwMT9="1YlB͎BXj`}>D4$Coe+KQ9`sDNjV -k4F+ o,%`P+'?i1H*6VA)GՒئ զ ʿչbF/ -Iq 8:;MubF`}hVU|?ީ ]jB՘)U"krzJ2Ȋw&i.""};;m=rtCM XK;j!ceJ2:̭K꜓*XK(w]Op`3AO$Ul krIүSR@ldJ6qw~j8]<§.i( :ȁ:0ԥtLU01;}nv^lX!ElB%щi+fhBB,λjW,$>%\76f"UM4TiZEUv䔩C3 ֝M-4&v!9n)3du5;~: G˗n'BNj2nE/W&Ǯ|<:h׊CC9d}Lw]+Vkw(؄q'?00 f8 %T [&kإ@%Ŕ,f0*u6jԡ*oo9!@%6+%e)b`B]pPaDŽX~utEBWscQ]8ovEmsV&! cs̆* +:L)غk)zKjT;,h@YDv ؒ]|hk UJW}L*],UuČ7mmOꁢ|[[qm[ HARQ&fCNtG(~İt9/ׅ6KƐb}n\c<E)S'.́c,ʻʆQW&9] 9"x#47|B5TvZn]F>Q&QtKqC%9/x&ۅu韋XQnD *ѩ23̔U唸EX<$RX(o"bjQ#b*ʡ֊I5n EKiXAzH6q5(7v6Y^M!dP k#6qӘryڝ*y3ϫx5ExBmB/۷-mfE|*YU ,8.~iSd2R<1?0Kgò~of7 C#J7bN;Gƀ{qtioxLJ;?ד28Kq%ro.X‹O ymbպ@o.^ߏc2\I\+FtbW X: a+swJسCT؜i}⇳W-AkEgE=;<[.lql ..!/Jd]oKhoo騯ߌfN_D /I`{͊/?m|u|mU[վVV]lְn-av\gM]̎7Y볛qS9Ytӯ폯ӏ/?~-h~O?<,.{}]wVgM_дiio۴CF9K~! 
]ĵ3;6x-@Ƽ|rCT^m—VsTm6.HȄ6!-ֽ^Y Zx-@,/H?`XT з9ǶfQwGiS[w#E͉QyLRJ6B| 1ib ښ .'Gz`V卐b 5 ŘYԹ![F^)֜Tٳ#RF@9LLk:錊Pc9j%r~;ؠ}W#c.R(P!+ԏt# [~~_g0xvXXŬ9'Fg^ e V5A iʅ4O7Ҙ@<ݎleA=R]Lk~_ l.ZXWPǜDl4P@<u]R4“h6W|xR ߏbNJcvE{`GĔr-#2P-Tܖֆ- ۾RqUV!BqδhQ%p t8]#BC֝GSI]wzIY|X 1z(+M;3ŇmS:^~'6X67'k&2 }}5ifs12yLI]4)aLWL[<"L Djq1$U|D bT+9Ogżm޿: k^+Rj80ͪ 3b CCJ1:]Ĭ)NR?ԏn4BТGfwC=yra{_ɁJ EZ_2o\IT5JKsU]oGWrG@>\nbc S$CRU LjlI$rWU1X'1(\S t j|[AqW+騡1L1V#G"4:b*BV'heՍ;fz ƛ'XѰUDdtӇ BQ1o\ln?֗ u(˅:M`z_&'*NFӏ:s̜&mU%?_] ֫&Z a hBqN}L^{ǭcofŌ\ ܠ$)uλ:& ŌMڔ'MҮlU/^\pF?q*nMOv,YTU ͝峷oq7Wn]بoL2Ϧ mgq }B8 /0? @x40h*G2&[L?U#u@x2hz%z>VY b? ٬>7`yN=oMMV:uݨmVq%B ׇ0 _ҏUMg}_|}UJS42 J&() WJT  Wx98kuOoF)Xkͷ]vW^|{+T##Zk4'߷` Rqye9,X?C4^ Tad(~]c4/7^0^ʪ*fJ/U-_&Ŷ?85WGD@EuVݍmi![-&&غt C\ɛ"Dj/?qwi_@^皲x3%pi:bʍi@Qc*:<6ni379ljg`(-@&Ϛ jދ2N[g|7mZL{ZVM$w`lhnj7VMܙwAZx""Yճ|+OrfRD#F"J5XEC0@{(J@Au}QŇ֜X="U]NWͱIQ»~t9K u #hL-7VD - œ J!\B_ Jߝ?Ր=y 6ŏ%ȝq蒐' mľ{+z-&YX+Iܛ_@ ?#z}](:3&nN׍N&y ^J+uuZhV:s&M7 %AmM-TD2-"|P ;Kף"N\R".=U8]˕I@mb3,Z8rfkTR:xbW:qSL uxޟ/TXQ JnnˡԧU9Rxv-mw3V(l ,OuǺ4vU\ɕxLHFd.x&n,} ,K=%*ZMoMa])83sNN|׎V,|Js 70*@RV@&t,мKLJ $z0Z`:}-SOMNJJWCJTc `V-{15CoGP8L*F(7?jAg1EXI_.IgNP}6խeӇߑ!*,ewXvrξTKŲ:|t2e/p2J`9 ]f[1K)A9K)&+jw.UK_]:\xU}޽G-kFe B`'DH-ǒ2y PS*"WViΚ%`k C)F: lp6HF7g)Oc.*5X@U6slt(&=i͔k).886ŜWÜ <_]lI]fKv)_>qҧH9vInYA .tw\b2UV{WךЏG}=ޯO5ohnK\<@/n],dBq`jZu~U3>@Sr|_9)o$Wz]Nny78] KJgw˖O2 1mTn0oR; 7"݂ZڂZ 8R=  R-X|QfJWU8ҒF%0u8`39 8ȆdX] ;x2׮ՙ3 =-q@Q V}$ 5s%E^KgJcqD`J#X ]2#B &R/5eDDLt*5OQ4`pHFNl~r@8vƍ?Oa>{Zfj;(j>pu$"hiE;Ryÿ2UT&i#jod1R=Wu00YE+1p9}r]v2e"m.h\T]yYkJϭe—>rZ2KmieT nzvmazhvcDz'czN -RPX&p8p)u6H1B-P0rHRIpu`IPc1>% (59Q}7},ڨ=C@N%}0Gv =/(.&բ.ψpbf0” E#8#IAÅ@ `Yr&LIX5Fm2(ik pz5,d#[69$Ia/\R"8!1/"jXuDHs $B AV[6D7I Mš_PLsU RYU4c, "|lDM7TLtXT,Wc)4qdS1K,(pǑ{a%V"h=>ed,e &G!m%g8@ e5"O&D"?⊉Z=Dc6 B1$~ziL>ӹs\jLHi1U%Ȕ`&$bf}s v%(ތĚR>*@I]Jd:g2LG%y}`e;d Iq ( e5B F7hi9IL"gt!2! K?kÖy@&HAV[ࡴ)/e6&rt)l%y1.,8.%̐QchGMrOrvj#Ib1XʜcXԚ9 W eR0%ȟ3UNQYd3P"4s61hQf_ET,9嬒?U::Ĩ[ g>bō;Ns"1P3Q^)VE !"S>=ou=)L$; (:09H& *7#lpv)ٛ]>JA7,j,2XT#qVk3Jl#\yxGőEqLrXaU:Ъ4[=ٷRO\\kRgd kC6͍0*0dC$4:c8W nʁZa(m<"@$6,P/*|<#:NF/(JIGcp ͈sVc&zH:3n4r(ґCvqO񢜠Ӵ:38zlNt#*  ظGA x %:Uɰ&J& ##޹b; ,X"%ZZ2":1`gULPB3k9s2輞o Rܨh=N;[B7)9WoR0#4'(xAcf(uej|HPg9jgPfGi_H=4n}v>=?kv&3A>^U݃n.hI(f+k_FelwmYxҞE@hnSl-ksU˔7ϐ%Ŧ%993s93"bn|Dy1HkGtL-_^+K>Ebx#р<5wj),Kr-il eF3Elɭ8 K-%AƌHh)y n: lg5y=\eP;rپGG*#R؁FǤ"]0(SQiz)O#Ay- c H7b mf} )ŜH280N8h8+b x7?^LU kF49 N206je&𖜅HeEʠb x!4#HDKAhDpĄ+l6#m-ȸ0W,UJSck|&$=59n96l>7<ԍ?{=|ZKwঋ)8ĔP2=V,7A4~pfgv?(b٭ĐigjdGX"6dI19s)($\d ( t^ߛU`VEP H ˥CRb.؂\wgu4B1Mc,N/g&+rw2d#S DȳՌݽrG̈ZKJAt#Nbzѵqx3u0>[¨7i֧tVO[kܢlzQ_8Yfa1W]E5T/ves$+/iC%.|sKWM͐fklfu. 
F0bZŇf>Гet[nl;jS_A-Z]00إA0dØ%qup6WqR- Eu;sL'_~쏟^8gO^VLAfD~8 ?@<#m5 ͛4%m}UmvݻrI,پ[%Qz2{]'UXݹ~yZfu)<#@6U'?%_/'Q4@Lr0/@x ~H'P4تL3ݬ#=+&_dH!ŬZXLV1Z, sCDP"Yb(|ăD:7",CQ)*8wva-c َFQhv &A89ե z+zs)0B)np09sH[G‰uXTg e% ^,.mz͊rEC/qb6 yx໷ewi빟-y+q`k#9o8 1qV{ sWL#,8ՠ  L&^_]l- g_Jytj eo/BM\Z"šC Altd<{{ozջy]\÷EIoXhqpt kPSNƼ`WZ75iqL}<4k[/_bŔ^AT>X[e AOUghH#Pgh(h(Q+ɡ'JT*%  i (O^wÜW]u^E7m,9B`"yJ w*#E>KDCAE͂RΊ(L2i*Xmt ϸ>"ѭnF,B0#BXY Rm"bEʻ+Cʘt˧RѸ]qegs(Z̦Uz{juzB;Xi%7 ApRRhሓ:dytϣJfihTUJy-Xhm 4wF,$rDtR KAK[#g7oY~|< 2漺zogu8&zUKq70_Mp`#E/tdz"/p| i4,5!u/Zdc~#ԏyCwvSeUq;kAV0#)6gF\$dM0&;H>O}iiZ琛&`k U cɤF: lpB^ ).*51V.7M;i] KЮAm>Ғ֑/Y24HD [A>g4X3ԓR=R$LeTx "'̨`4 aSQ%/3(QOIqB(9VH+Ɖ(G9qHqTQ( Ȣ`;X+\$JGƶwZ#gnH|MBӾ\2b=SW.$-]AO15FZk6՟bnncҡB`s+,cQit/]]®M !([.R6,XP@ZXH0 .C22E eZ30{-#chp/-3p%`YL7a rfN[G7}.+K5ήߣwKaW3zQu _p4-qHw"r+iUhD]>2 vH bx=H玉t^n-u?OѼslW6wj^n5{^J!jo~wk="kYW:e-M_uR,vMyEw'\n/}ZO.~)*PfH[:ݼMݜDDB+6ya: d6#'~z0j}\j /)EI@(Kd68%j9MgKu^m…*.$UEg)eܻˡuumleV] bXq,wl0ZZ]*~V*w~+ݬ BLgbR 4SVY'-K[ fcv3k͇z#o%_M3(l5>\/Gmz͝'9q:rа+mHe/Lꋀ?$7 .bD S$CR>g%j$ZCِ3p(0|+m0nEK+̣t'bftp- Rr*ǂ  3 ]D$90ߞ8 ʓp%+7CY Az@p\R8PIx),$Ι4BG$D(ÈJd|Is<4h>C!Q It"8- jk.$Q)Zqzu|s'=;G}Q\loggg͸!~χnpa8ͿqF"&Q*Pr W$!ODN/X*J1.0EB,0dcFCRXM.$LP m;Mb`"|L+ݶt[܎n< +&澠vkܱ-jQP@QpL܃@˕ B0zE5:$$JӂZȨkB$^ȴ8"F BBr=Q]{ak܎QHPq_5u{D< b}.AD! 8ieRg#΢Vs7.o7hJTqIRQz Kg\p^lFII@q(xv +͖zm dGwXF=鳋J8LJe*e}!2NtqڳtMA\-Xt+ڤO{=z\_nd/R4\.5mt G˔KC w>j8.ʘqؔYK5y70J(';!%9wlSw6G<ơ$rUBcZ|m٘S@ zr$7|~0:gy!/؆ [rk]TLl "E~#JgSFiep݅]._{4NS fh99"vڿP;u\\z=rO竊5ĀzI̙8[n.veqGKva=Bz't kFCc78Ѹ'rCbJ]Gsp)@~^ f>g*ݝŬ +\G+H6Y2]5QRT J/B4;%4+ >OB%Rܚ]x]/n.kwn+1dY~ι!%e2:JE I$bd$vb:oٗxb6rvgg)kCgkԪSAvx)0.Tu"霄w"H!uq™Nb"`NE}FciJ+tW!F36:m3@H!vF JQG 4kh@x RpOw9lRe.i4Ľ0(bw-is/OpNTAEqy!c~Twvn40ji2_.9n1|c.4|<']jnm 3_\NFθd b PHՓJ(~OQRR%+ EKT"|&B'7 YhNG^|ҹڏB{qibK<; 4њ!.y|"ȐxL˅;䴑3ֽ.tG}Mc۩kܾ[t]ECǞO{9k;^Lq >dJϼ2SJ}Dʽ0t PT1lT$52J1hEDFՁ*Ř[*F|Or,xL2Ng6\fs{Qo$o/cʕN~ͧ8IW>/Y;TciJ!]||Z\Q}umB8L8+wxЩ+eS_F刜kpP'9uʭ" uڜ'oϤWi?#[:ˣ?V>\A>8NWǯWW6n_q/l"-*} f6wqDX+Wulh\z|l˘w9ܝ.`^*#rk}[p`|Mx#V5 !j>vlew֝pE:|H_i#EwxNڵE(Y!Kjɜ!dS(B\1)g bpzM^Xu'UnYzurOz7JƷ)Q$͛4-.fg Sƕ~РFd_W;u,vlԁdj3>DPWs N ҜC(DR:rR|44EvڄJ%Q!W-P,JDI#b,r ]ZflG졻;ukR܀|3~=d | XSmX">PřNq2M тԿ7?d/|a2:+Lʡ|! ]\zv?%͎6) K}ř_zV /jyv4'g}_0asըN#~7͛F/-;t:t^i/_gWpؾOu/,-JLpQtu3ud 886%b ~VCmU*p؇$[x$)1#&$9霓V%!k>m]ΡZ/ Js"R D^dHɻ"uI2 ^ojlh8;X_\M´v7_n-ښ;wp k[IW[[oc֘}1nz1 >/>ҭ{Bnm)J]mgT9?Ԑt3Mq~Wwrw}߰>a7=oFw9ʕvX?vBwyFۻEnxQ,v`]<6\Yn/0JlNgC:9E̽Cb%rRGJ%86i2q?As7.Iu&ar=t Y /B3duj!; #[w^"DVS ReO*\IP2Lb[k\)1~|{hߥ{\WˣvAB =.J &m(Z+ fH f+-IP|̦濍T18"!d9'066fl7.qvSyz?t fz9ӆfO@ MmuFh0DyvEd,ހ nLohr*%n&>Ƣ+)"6TTzrĈƥR!U Kr PIaff /M>"5r'wL@Ͼx/x<<>]re$!HjKIJ;Q(LV[b8)K.DdPB~ 5 I M,nS[-Opl'>%"֬Vl4H+!xfڱ`7i|5Ϸt**%8먤S.`vM+KX[m xyq7}Lzk#Q|;7GoPE |֚:v>v"O n䡖dQO퓍o( %2dI!hD)Y +RtH'+0i cAa :B!J?21Oxʣ$-?+NyFHg04? g2ֽߋk7UlHM?vKQ Ex2 W]kSEo1b LGC zZ # A,*8ew9I!j*DTN1԰"!wts,KKJR*\eWZ`[O4Y,x44}X/ uvh8#- y_7c $ҺF2*U \&&#e`R (-#ӶQ!0HN>tW IrFFF~ĒXQj3qvn@y`O.\kgUs'_|n myzņvлDtϔ%)$(' ] xS(&Isj@%v+F7rl,W!9QHM>Id1Ȩ2vYtle2Lei56 cwzsBpڪ ɠ!xtQ/od3KRD_RIX:# (/}p>{r"ph9$pJrK,(TZ"R%vl"2F6'mhb9&*oŬu^)(=#:b&=%d ;_ Ko'ŒOkV@/d1"$gIfEDo%C2KUx P;,v+ BTb%H.I3!,Nb%%Fn0΂(A;B"gH18VWB2sn(knlgz ̬WɻMʧO%h-O-Rh)֗('=[T f Q'HL |z-p;c ibLY(@j !X4Y*#F(9Uj(X[GrZ͠Vq'=c}iTu Fc%EjnFb |*8WIzpM,_]_Jf:gLvvZV!v4m-1L&CBA\Jf/1c[V?kxF:Kˈl!޵5q#2=[g*?$7٪dTlWl WkjHs(_?!EIFiK$@wh| ]u" XG[NF/(JIGcp ͈sVc&zܕhow(+c,wٗ}8dW$jL 3#;.6WEHd:ܐʐH'ˀ>䉂MhQ"],s3q M ~A`8-r͙A6ƒEmC)!8k &ο8"`8Q&'ޙ'GLCP &s6^*@pd^O٩=G9@qTQ&c#XkZ 5@gm]B&ڥ[;;ݥ=[Kﳃ?}"͋`o<UJ\̿UsL?u %:Y9aMsMMEG vHTw`1U{X(X2ZZ2":0b85ګ.v! 
4gre |h*ōJ1c*h%{Юx!% 3Bs4vl vFyejO\ϖZ=}O.o7aqZQ5DzՀ׍^}{$; Dv Xc|0AdO?"{"sB`D.컺JTJޫgRW@&`U"wsubURY+_`O0('ս&0;^s~+Ƿ!8p_3h!XG'{|zMCcc abbXbVV޹P\3.IQT{VվflŋAꝀj3)Xu2PEr}y (.CW^V* -(|?]sdfoTNv7w2 ycNrLr`II1`D<]Dn7nxvTau@*DDﻺJTٺz>J+G]DZQW|˩O ދ`NӪQ+PWWWN=_K<7i'U{s8yVb^\bB̦8ZfB>?jd=HGQΌǹUF2!q=$̏۝1F8qŲǰ.[ 6FxmC0V>:! gЁ Q0M0,BHk(/oPMzneܻﮆ7A7rDNGƋxnʳ_ƽOC(me>1#̚@2oYVgn_f{4փiRq<8,:v򎴔kGelΐ#32sc'Z;^+ i}G5tj)X0VuIQҦ}&}@keZh.`ѹb6Ba$1) /-&_]^9>/nҭt{NWFP}ijLRFRy/ !,#<ٴF T*'ԕqƳ3`ǿ?L_}?_}5:xatV6IAn4hZv4l*EӔtfiWuvoKb̎j ғ׋qYg^VX "rլQB/0u b~**y*f֊*w+I:(Wz`S;Um8ձGW{HqQ^3 U1k"je̒=g:D%RD"·@LLj9+j$շ S-Z^ga IM/&ZTl!+4}: Erɢj݇cdoФ=M~*Z\D}V^7^ɌWmںu %wװ/l_|qr>׿,h!3=ϯ<]C[yJaў^{7DD b FрG!eLDӍnng/D~ +l>qUgs(R߹Xe5kg;YXGXWFgB;Xi%̅ 8q F8⤶c<(8YRq;TRF1ZleF͝QKlMQ`:`) NPBt㝑soInr B $ǂ1 LJn-"* HjD7?\-Jdt݌|LvM,P_݃WqH&d⿓!z91`Gp9 s074KS<^#`ɻ퀞 ;$ioMtҟcnU>߸19%\:)5L0e!Jh fiE)3A("@.ŢW<(|q{Ӫzzi&Aj'rj9M]2SD$ Ty:<\7nZjn('/8D&JxS)Iy!:|2Ǐi1-cw4,^w Rf3vfno^*iZdJT M9R yfE.n_ݾME004'٫ŴY|Yd7Ar(qGFI'#774ᚽ!5 G-ٰtz40ab~0 ߦf6|#Fc_iUWt~zCꇯj?~&kP,7OC_>Q*]NL7PxC>*AgƳ!z@8(Ϣz8YƧgv: ̦"i.7J7TUR M'Wa1|p3YAŎ8LQóB1,|n0AΞuq2e :߼ywux|< ~J2WU..>{"K/-(LɨrP~:yk,Y MOُ6t<=Od-O#ћ7qXwNa-bxSAw\с| l|yej`.uz,c]QC^i(ʤ^BGHEҊ!Oս\mD_k)h<Kes)b:^n_@of2E1^LS-Sx$4z-5ZkbEyGW<^O p륣DPSI!`u5g-ԔWh4wTevԘֆfUVU${bo7t bߣu_ǧG/<\6;"r4nlL)"&9&R: 4^ e@}T5XkybCsdDn^ԀۊtW~Dz&-IɠBQ:=-q)Vxar@HӨ>&v 7Eg>l-׋턴NLpԿ#g>d^H=k&AwTM uFTm6:RN7sx 9`<(m`*KU(DfkWL@Jբ5ba'1ZCBUF,C1KV}!TЮBuJu T Wug4N4^UCt(%3h9E6U|նf+`c{ /s`o~ j..zi*vyk;6go?{V%zuߞk6^f\z ~˫5GjӤ.cB ;ZC~Rm Mab;{Xݱ,JɝOȽv=ޯٛ+oy>_9uyݯs}=^qnbjnns_jnh]n~GXM)r>s7pW{ZFн!]<7sl^,HδS~~|*5E/iOut~k u=/'|?'-qd]\oDmgk~WWȡĘWgK~=r^cfLi 0l&!(;F0c0OA0$]U0d(\)*X+d_*x$+jDSTSSwqS$ 7NWۑG %@1+2WWLDѢUZRl;oWQvb5K4I,F⍈D1f^QloMQ\Xt ;G7=y9r~ɳ+﯈EeMtV(@`LJ-@Zsp aseVm6-jS%BkTd/ig3*|a7x/ξ&_x7GW (^9`ٛ}n(T@p5Yӽ#3x VG=Zqg!.j+ Du,EZQyk`yIl`9T]$M2_C1Y RU UȼWѹےl6T)XzSQ>K)_ ߅P7L%T.'M5&}ab-AU>R\"CX)9Vڈ{ h\R>خ1,>i54goL YC˾BhjD=ܴ^V+Wt%ڄNak?t@PV  ' Z hV_ fVH ڎ}V_ro곤r;S/X[8Ł 8^v~_F/F/1|3})p%VCQ+ 2]d$kyK.ʎ%p!Jcn"ޭsX.c" ]w-9"{o/CmQT,:Ɗ|1u1,t #뢃5M-Ncm#6'fݾG/Ƃ';{1f1!ZBbj D$G%S3YSW;FMz(rmTW A~@QR<ɒ^vnhgKloST9<-diAUFFjj.kyώLLP&4,'i_;RLTpQ5 $Tjg=AnP+]&4'S|stUP#'ad t}mH]C**We)+b'q,h҉؎dr$m4H;jUj|2ЦE%ۢCYIGEdm""" M^x&jRv6M9ESJjS\hu-kYய&h7`lT񩛉5ꊎtB69ZURu5Ҙ 1{&+d'+,dC,YUDnMC-_X&#u T Wn|MxS3⟝t ˫)Nu<ϵEp>> 齡){nq5znRo{G!&4ڼT:MxH6;È(YmR*Ĩ31dUiqH bv9FC&d-0RA1W59|^3'E.Bt*{z/q+cBa mϝMEzօ5| ~ffjwc9 `VڷĀWOWш{FeM@ϦUV`s)cm{k҆,cED3rW"9vaxGwz.ٻ6+x7F~aļFjvbPˎfwi$DymIhH@Uu*O*BwNLW/ J[fCW녛 ]mbYZt ڪ``h~Z||e!7[P["v=` @wtu~kEooإTKJqžwѺh3mVwXJ\hhyUI(b~4Mg]ã0싳t\m؂d ɋ㺏Z߸p,Xvh]Q=~ie٘^_g5L/԰ }QG4'Qz5R VtӞ!xϫٯ''emo!}ջ~fMr٤KSD\f+r1*5z 3F3jduv.Z4)eL/錳kQ瑑Q_FN0~VZ9b?oĎ U,9Z-2zk%l$AVQ{ juh4WQ.-#p7GL %' 1cHZ_ӗ(ؒ2V6&]5YYU.GMfdq+2i،aR?~軁r=iOV?WNqm.TOew5zm1Yڙ첕IXm$Mid1XI5rFB"U@qg}ʹRs[EEu<;:[ZH+NpA o-UAdDQҹ m?ڤB3j]^e9E͎6ME :Fme1$X,nmYiR o:CIMMԄ1bN"4=XBuؕr mq6Eu:22#QFB%$kπpODKG?4YV1+!h9Ghv:FiD9w gT9=@.\T{}2ޡW"NG(g@hFI}B9~rP~d|HEb5Zڋd3<%cNօM,ix_O͹p iVE8USI{k%xN4dhc9\2'%d >}oaJ>,(;z7Eע$S&mI!a#ӱH }fN^DCU>)Nq`1zҼXt):Q\ЇF.\>E=uk,R`榤g -Iukg]!(dGhOM6deGގ`f#_fH ָhS5>#ٽ3Q=ŤuZFKtPK|̡cLGuu $$5(ڽ,C)9iD*ZuIBv 䁀:赈JG[|=6$BjF,F,Z{65^ PM=-;{ڏEQIIb&:_BЍJL T "Q8TDzX;PaRB]$1#dlDH!(ƆrĪMPk>Zl?vXIwH'Ytg#MGhTf%ݢZzVE*¯w ,e4 %0lӨ{tВ&ZJmm`=A;;W@ea9<=;\+uT$K+A[.n[`3 .-zLn> ;-,F ݚg)DzI(ΓDCkBm1&Pz||4l}IeጀEYf1iJvP%(H Ш3) @)w^;ZT}f=*-+P!>׮ow`E^QH"NdviP'7 h!g0Eʟ&oQ^1bp0) ,*1"@qƢb$ӽİY;p`\t%i#*`hAgМۦwZ,̸R'5֤Y 6)jP%HI21Kd0Zv!; tsrgރ v[j-z5zC-*mP zxq Vp*-]QZ4h䋞P+Bbx~)AH Gk|T('=zMW:PCX2\b$*ukE!8XoCnҘM5Uvi J.ȎYxЬ$Dsi@fƩ %:)jI-, mT'F358F'cAA(YE <뵸y:ڎ.qU'`ͥ1:-4[]))ɴPw߽| %e+/ 5iYz=(ޖ]ٴ ;6s2 kcYXS50~ 3&0 ul~ fN  N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@zN (zNQuf@|uz@b'; N v@b'; N 
v@b'; N v@b'; N v@b'; N v@b'; N ?''G7'P@gZv ֳ:0 fv@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b';^(H\3#'-q\;'аNPNے@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; Nti=Bޥ^{wLKM~_ݥz^w(g@4=Sq [Ws۬&0.%бqn }>85#;oZ̆ϒк?qPtЕ{л}a_8_g}7]=3_ǯCWCnѕ{]94*9ѕEEq6tEpM ]ZtE(wM]1]}RyfDW ]\BW6]+B0]@VgDW,|A\ h?T^"]jNŠ/̆s+BPړA/BW1x\gduLAd@C+5]]oP`%&-i xw|?ψzCu{iY+N*S}[Q2ڗʉb%込\*-٬uMGTKЗ~wB]y$|H`mMB~oVG_h*w+OzOm&-n>IU8 < / Q8eP!mxmVZ/6|W;}xNAOTr\jJ =`nŻY;Kujhwa}tg_]SDvrlxݘkSMLs]γM3|ᗌϛgii5uVF\?FhcuD'^Vs73+g+5ͅɝQ0]DBx}WaˇfA[cy %쁘=Xٲ0 u 6[-YVRگ[+z]Ja!G/<?vK-\6*p_WGb21m00-V˖hj.3]e3qꀃߏ jq 1a\fW8"J:]J^ ]EE03+_1]02@v_]Jz/Oz/t8+?zZP+#3]=uwV;fCWW͆qNWIHWZ/[]n#G;KK%;{01ޗq!٤$,%HĪf*Ve8y"*23tU*hNWҶ-vջ0iU1+`j -bPr1ҕbFw*ve;tUUAk[ kY=]]i.4t)v悬3tEpIvv"f-ЕahSvȄ3;%^zk|e0l2J˼`z\CբTVy@au?wXd6p§ 8[D^U%TlXrw_KE\=r_a\|-٫!WV^y(&W~FE#qQ8uU]` "p;)h&n( o!"U+EW誠UE_] O"j7t\{`gp'vC)Z6yw+jߦ 9vUDW誠}{螮TX2Vv \茺*hUUA{uut%,<`+AnhE骠HJ /?m}ӇoJUphš/6J,\x5J/ґ(.zzo?aw/{"uJDC\t :2tʼnddJYdR|osFn.?~$;+?\ cCƩ+tVLU>YWy JFrQ( (}&J۔:`E'~*ᏓXϓ|./aLR˻;Zdw?ŪB}1+:Eoѣjlxu ms`ɐ6*Û-)nEO4o aboU4Vtjo>fȯV.rØ؍Rd4ų!3Ѩ/&OtW"+k7_NWxiFRʲPA匨 s7?Cr@WJAp]0YO?;M1<[?trAڲNfi_Ja]SrFjIyk\Hg37g+t)G %PYGm\ۊignT^ӽdާ{OǷZZҝ:f88&zN~Y4!_'7p!*Cm͍랦cc!u8KS_biFEZF(O'Iѧ"Ot^Q7ౌI)ܿrst'~674܌כK,>g\zG>Gnw̝}7|t?xCՇfk 6J{@ 2ΚO՚B[d:U#7NR?zA~WMr*;繗9Ґ8Ĩx־Sԏ}J2lIAFu'I KoL,\63s)E Vbϔ҉GK%H@PR%e4(Aj@$ge62u.3lAcp{PԻPۗ+0R(2$I9o+H1HpDFxFW3Rխ XǚV;^;iKͅ$hIw1.+)ew }VT.f?U 'noug{Yje~KwUI`j뵞{бY NW6Lф[u\ۯm~²=p64JEC^tY 9rG(B_< >̂\X*5L8 H#t"S)qzBVoX6yT0|Y#[ |eڠ K=@4UNQ[oQ-ŗVEģc$/Ydз%$>{!1tJhiwAO4mf$[v%2{m }C! Xbxpx\V/er#z.iZh.HCA@] ke;/*<`RkSﵘzջ-&Y'@A(bp99CN\d2!\Ed袱&ˠpɧlTIrAKe|Y3pa*R6dὡJy^=d6-&Z'+^ͨg^Ο'%3z[Bd~@o{s *ۡE 0XlU*E "EÏfpeͯRk8i|eP%UE+0R`qʴp k* 9C<!mA`A`+T1s 9!)PaCeIiP ࢎ1 lEf'0rrrLUmBREW#_;v4*R276`-xM+?_.<X|Oe\(Ӹ[}o7_y]nG}X}]k4YՎ r1Տ Mme=+UWt\̬w?1|o*G(rlIԷ1Ʋ"1gIyjH&wu@s˥\+6/iC|Yh)ӂ/¸A R$LEfMN2鲄yN #~#%g[7u}`:atd`0V3-Y֔U~O !iϿl]q)4Ndfc}IFaT\ ÈSgt`9yʎ !5h*v[Si.ip,`,s$)0oemh4w#RO6q@#dZ eĈ^+C :@GO1.ɓR/'~j#KҟN;O>.~*uflyi9=ҡgwmj b; u{Nmr&A?yFHq}_Ό.DI ,rȗ{NlaA&ni048 1](+J"=0[*?xE4q1dTE6 A$(Fd0E3)( LU ;TsA-[V`UE H ˥CRż&_DݨVgB1Oc,NW/3N=}A7lW^BZ#~̈ZK9jAo X1,s0٨46ivmLmTkg)y^|_l2fwx&|q2>;on(G,^,͝'XM$B]=qwt Fn6+>eO0i)^DW>)+A{]׮28ZZ%-~X1rR YWOgeOFaoiTx46.̍WN~|S?~>W#L䇓ѫÉ9`V.IAnuѵko5Ul)sӯs>LZ\o JOqդ]dhY'$)<#@6lg\4QSϛTq ]B9Mj# Cmx7@WpOuke$bHRaŔP?je̒=g:D5RV8"·@ێ)hU9 OۿkeNU8C='pB)np09sH[G‰uX !19X%T*ϳEQhrfo;i7/\ƉxQwxWEOhKй#׼-FH^ 5(Xּە339HgdZy/kq78fvnq?zգ;wa9Լ\7,r`2G\`q "{@CywySyDTr\ DQ`tcUݞq/|D'{U Qd<)V@h] FрG!eLDguaoYpuHH 'l%Pm?k]3t s7u3S!έ#/&q'V{'.ݲ*qR[1vRma<hƎF{(VYE 62(I# X,%! j`S7rv;C~xVe]}M2E-hn+g,ƺ}ojџAk$ʣ(g(F2!q#Q9R)`WXK~Y d> B $ǂ1 XJn-"* HjQn~ >Vcc2w MBeݸ۫XnL:Ϊ_JY/TN3ωf< Ǒ#@0憦cc)q#bvDGA#0|;17InaN TZ1x)5ֺ聍pXǚLk^%Kq3"5SDcfNXŹ`GI4 w#w#T_9 sR'&2Ut"b&E$ȥX)gsN9"׆<,]C$84S+LY$l Fʴr;OQ:Y!:"O/ȓu&iݟp^=B-x8D&@ )T>;)CQ#{ogT7"mQJN[YL6Q,}}؜e5*Ω'Y&4smRq$\mmoߩ,6Lqͫ$fqYX"v72=܏mCiPih u NOWr:M$YSyfOڭ` ڑ>n)m5"dܦã)QЛƉqNK2Y7%&J{:l=^'$$sĤoL߆h_*_i;U `0~l.Vڷ>Nŷ|tYqQTgZ&/;7LC,pϢz8Yg綜 LEYHUӺ]u:++'Wa>|;wSs/8LYxzP0L9O>MS]A$~EɬW6ngux, ^_FgϞeye()Lɨr? 
5b,𦘏'Ŭ^ wtR^$__ (zϗu}z#rQsWe^k7:"׬궻5m26uZH3P{Q&x =΂!Y~>.< 軼1#(%3MtP]n]{HuL>53/e9(w8sˍ7k<^$ouy\5uE1WL\ [(jՍڻ[jEuo5(V|dp-p]m jk^ޫKOw4u~m1 ULTU7Ro;ӯfC CqW㳧}ƵfgRD#F"@W )ZS6dftDY2T?Q?jz2 a5EgZ­6KcEHK3,2,C$/#!X:^/VgA7mm3fZMrG !(cgu2V5J9jbVv=-rUYe_zzIٓTٓsSZDdx.ՃN?6/K؆~MƜ,<Ɣן| =.ևQ/j@Y% g}U'&twm-|QQxm05B];z=nsxB5{([Ɔuc5a~Yc.k%e,s)<3_̊!t9k4ܘE ޼b{:`5fWb"u:+ߏӥV-;OWrB6i#уaIr rͭ, cccMAb1( [.R6-0ƂG[XH0 $ }!,UX-zgM7L@$ZFL&Z iY=Ül Ϯ:n~jLdfrMv[]YZ=akElXuInMMyݦSxȭ5|jܶ[ bIېT;ojhV9L;vm{u>3-oܾRο c߅rkytGmʺS6F咻\YuZW䘲SllСи!UG(B=ln|#2I}@4*WwDv *w+ Һ,*$FTEoSdgDvFGd-HE,S j2k /Cdspl ֨:}}HDWl;3ur3/L1n<ۈLt2Er"DhKw Tw&H 쀨]QQq!,lI${+%R97j8K&|li;KoKwx]m؟_T^Qy3Gxep:ӈɃ`Y4gs0fp<})ĕ~Yd>ZB))&=)1T&x#32x5sf"T9?^7,me UXXAy+}|8ɦpOtuUBn&r2_9b;-ad[B%E! MFnEu nVja;Ʀ E˰dE"B'_CWbBι`1X>M'#ӭqQF6Sau"Ԭ*\ ]:7e=nlѽl B1.2#30a%X\EhMsH "Q٨so.w~=bYNyhUpBas*Z[V}p.%w[5 8 Q*5"VDQ@)ya=qCcߤ>ie7N2TsWT!Ur-gn(a,ʷ;?jp"D5F H@;=媑ă!'BD kԷHG:D% NWݨ&pz( cOd"h7:R|wT hkퟻEt[$r@$uM %jC{*a!Rg-ךROr?CL$Ҭ01A8څ>i@"N6$?+LZ[xq8)C.c_i ggӧkgW]mt=?LؿnpMv ? -Q^B^ƞY"r֑Ђ.ᗆb.e K v?RFit2 0BC08C^n22A[VJ * Az np}Хw@VAd'np) S4V‘L<德K>}?1C0-vaR|q,b k3\JqFioEl4UTDW(2\EJ+Dg;JWeӋUtEvMOQHj`v̉<.]Վh: ]L芒j}jihIt+] ]eFBWT+D* 3i]]!`L1tb*tQJ 7 ()2`(G]eBBW]RCOW'HW`#%*w6uv-gtQJ8DW)D3ZyW;)ҕ" (G]eBWVΫS+u7  1 |sM]M9/.KpG9f._\͝s&9$]A!rtC+D)!!Tn8E`JDWX*2\)K+Dy3~+e3B)( #np5?.]I]R+]ўmz@At)/2\PUF+y*l8{:bT Xb*5BA*񒞮ޅ8Z]eBW<]etutMY]eG?w; B͍ )ҕ`JR]XCWЕ`t=2J.{:A2F +e9`[ [k*:ERD4~I a4{(;Y7ޯ4MøյJ <SP0 }o[-#FTdzLpX !Ajz{3L;yk^r HR gsSQʬoZyldpaFV'2J@fVVXMiEnGɇ KǑ, 'h%QtZGMSUF;'GN 0xAt9+gKz P ]eǺi7Zt۲0{yIة\wJܩŷ*'IUH]ArJR~Fԉ-A0~DPņ<_YTY/_8ԝ/ѿ_-/i՗%3\6&C\` w{K@/e6L_UNϯwq5UJV6gRJ]˜U58?<ӜƈB=C^;.d^&AriāX8D8orXx>7= ['?KvU!2f|p^W 82Q1xΒj={ Q#meV?mtDdx|vѾSb&R YONV?z7 ޏww7vz<xRFeEmys|>Y㑜gS_&[&D<9jlFE|e>{sUa* "?Fn'W͟&-Ir{e)|.*sT.-iJx|-]T-u7lb=D|ụ 5m}/4b pf} p1^ol:[_>[VVQ8\Y\ ~FK3No&H`3F?;[\)䌦I371Wsez1KsjpPŇdssQb 뒎ڼ[؄3sTe(`k۶o56n{ㆮ95-+QL, Q=ͻiMG+*pyCQK( !׆LY˵pAb"f' .I` uڭiS]xjdR.utfA"5t5\.n8\ٸ^Ɔ'a9p0x gb8 >\ >Ggmz4~5!2+nBD*P)#ו{argپTT;O??_WF"r֑.-FrX{Ied0Z9hWZPNl<^RT?ߎ]\IG=<ۺous*w,,ҭa 5ʢn %8Kp4!)R*DtR95^d Uo 빱1$SI)_Q9SRƵuo2t䗓.qBLOU^(~oWhWYC ;RQ&(]jqtᙅCҠgjQnxqC)n޺klEbk'v|)>zy$ Su9y?_7y,(O**0UN:@)RLI2eM//H^^^CT>BL(DD/w"rh@:Ea9~Uyb1J&ijb (w[ %QMnȺɖekp^q ,G7uzϏ*w5yfy5Y-UǓEY}.:oy2 6\ngiYouXkkt 7BuftՍBhs9k=9Cϧ0u]ޜ=VhJ4\u”ռn}c3"Fv<ސ [11k}~w0>]]ɓdNWUɋokvOoʯyw_/yCoiYp(YFQO(,H/bH {;{\lWHR9/Z)C^{-THYRtJ`U Jn)`Tי=3B2 ߔ ,>O2^sze?NW,Aej2D5J0,:IEӠ( ĸTPkl!{60Ba6T^3A%6`kֱE `Rpomm~<⧭XloH#Kf,8V;Nw !n6e3v:;-(\07kY;,Y{A<AqG XѸGRfLt>PJȊ(ǣlLю͇X) *CVdljvk)dJf@LYIu0s>פ~#P\203#oɈKF܆ulp FGu%);JrO΅pa}uAܸ:=8%S)ln MUe*[X[y7xͻyW/Wf 퀲Gv@ ~ Mu)D6=>=̡To3=޶hoĻibWgaTIJγ 3:hḰRcS:zQ\)MXRr[Ge2fn2e^Җ׎vw?K!/Dy )9X,+-d)5 [EeBRFÙ䅻=ž`2ddvڅ<㵠~qW6#>ll,SOk$J V&,T@p8~4a.e;~>VNwKͱNT8Xo *r/r:o8eGb{ ֚@$!csTTK@J:.sy${ɦ3^`}ݒcFge-ٛwn!ӏ~y'vS7_rd 5H8Rldb6C%m@S+ZrācU(@>ꠈhi@-x Q$ 9RJPdʾ&$3KdZu 9KsUHR8W4Jo^(RMAls!BbI\H,塽pݯruv0_1h潑7 AGAcP!NU`#4(9Vݛ;EQ@k) ,ؤ|Zk.gS1AkGTq\̿ {O횞Y&ib-8pzK7?:y 3#wwLхc&)am⮷T ATf.fWlY˖"KY9I:pk>Vuv AIR`+lΈz"dd'do_vy?R* 巿| R jC+"B:ݏ a|>oP.t꾁`J>w̧~آQ~auxѡ]KcXr+(޲pgl0箛GVlI&ޟ1OD/Gs>RElS")USbTK;X>D\-|k^ tz{ggtZ&-k|Xo!'6$xXeMH1odJ @Rh_9PHq%cnJ,)*$i(s_IHseGhAi9uqC˾&F\0 nWntLȳw`קot~MĨ3N Ia7iMSNL'pC5X$8Ԇ/3vK:.6a6 `hl.V/pg)h1f7SO:ztr~9]u}x:_~Z:gσl{WNߞ}\jnrwoFLzgCG9L2!7+~~et;]zlLĘ+!P hB"6&D%a,\J!qmцAEIP!2H z *)x-)ሑYS 6Zu9XpsB{v |wAB@ӻߕ }6 0 h.» b\xnjIg:hgg@Ѝ^XÕP~#8CQߝ~{ngo;_' _4kͪӬOV*иX[ƩeWLK=֣snOFf?x5C=GUMusEwkV_=bH:7\gOW֟iut?^Zz9{t]|:J??*#)W=Ro<˫k/?DGjA:iPJe*܄)v&]qW[su8'uNږNDa+ɹfX8h=B 6P)r60DcuҲiDw/+iv-3%L(`<̜Guhu~vrAy@[ -gt cHӲ࠲(S%얭/X`Qq sL xMdnAߺ˵YY4"KKIk95M?HQ?~C^muDX!ùH^Ri1lj!j1bŷ[F{բI8hO*9E `*zAn s }-16[ i)×;&ՋgZ@2XìX+&r&N L&N iJl6\}<i9X&ȣՌZrդ\.ƮΛ%5e*n J< 
{DpTZbՐ`L+La^?9e`ϡsBv~帩[-T@7#̸ 8-R}P2+(砀>@)PmJI q^h[euٌ>SDȝf"xW)f#&n59PC$d]hO/RPFNy?V3>r0\A'6J>Ym&A#MjCzu&78Ά8[@.GsSd֡42= Rzb pE@iIC/!02>|+{F5UӲAp K!Xj9[E2Zzih,Pkʗ' OZx 7SҤ1eedt mHhB&m7a1$q\si14'b 14EnN1*Zmg 5&c$F#u?&iq3"jUҹ苹07hpmozL.a͕9GmGRl% TcS>;m.1C>Pt|W/ɫt|%O t/$e $zɦ a)6ybK&+/;;%)j4dyB 8 bwzX_g2?? ,ll-b)X14FZ߮UcAeǶXί{$now:pmğo{}oᰁ=M!oyTj0)[z8di7[V&{=g}H.TShDXWs요QPKbLC(5P$U$(tﭥ.jPizM\~g)yggm9bV[ݜq Xba,= _p!xd* WZ3a,0%M11d Ny'4ۣP%=YIX)V[s 1Y0+6-Cg[_›׃Ҝ@7qrv7ھ\A=^3M QxSFP9kWCek+; f9%%#$#t_%1|?A1>챫uTFRnEaD }imRf kZ o!gոKm!uQTҒ-Žs.7(P>{ݿo[bhn`M~r?.:.ORnx|d w%ֻw_?bsyR >jYJ,V^Q)r-bqzj/]qЪ/c[^J/J8W Edjg͓ 'lݽ`~@9D-3ι`dV9!-T#OAA|&kƿ6z@Mx g{4qAܱ9ofY&Fcgv yφオ4`>{`LX 3%!ԆgՆJF;y=Õt!t BؔPAu{A^߈cì3+_]~&_gXbc$4ża^gy#g}~Ygaq مVYLu)0 XgSyH& ^JN>eWOi~ hQmOxHeDemb[0Jq) $b 2:˘7IPC>ՙ;-Zݠs½מּa(Oi%^je; GbATI's B ߃2{tZЇq 8ӬNblc]Xca;MwZxlz nhkot0ltIA*YI"1:'q(4t;o)){1xw|afXl X LAhô:kC.5e't4 pR@pk C;zz00#Oh94N0@r>Etd J%PFQvFiR׈\-٥\R* Vi꤄J 7NK8tvm&2b xlR s0~r r+'dLH@94y?&vBk{/* 8SB# 5- zEIkGOСi>/0Ӕԉ7\/1@CmUC 1jqRTi'K 8!C%֝VSn6K~xٝ́ϻih;ŧQ$9RX-yU^@IåR%x 9rrtZ:Nec+(:z.+ILQ:n>_Ud^h~>9ZaT0O膵RA N(\]ɩvYig0|2 }R\+SYNQ\yUb~V_8^Un#$7K0`ӳnەɑb4U37ZV0 Br1'!o)֜٬ ˛qOPX8GEEϮ+2\lʵIp=-H 4ુq.hj>/g^@&U/EL@7/^o/)3o^} #_Y)v C^Odw5ȚF֜ueu5$ ,q>Y@π` -W&XTG q#ԯN^^ @8%T<Ҭ0 /Jؑ#Mumb#>Um$ǒ/ύNyc9ۤ59{!xPUۼ?t~BmќcySSV&\Gθ ^tUsx@UO.U>Vf&PLs i(O)j+xV~r$ϟW^ٛ U68'(}{~1_#uumi_~R Y1RN(2W+2JBU;]+'4>O:;$1w"Ahy{ Wo@Fo@Eo@v5&`83MKB5ȈV=]!Juyte@$\m+DЪ Q]}=t%lz%0țD\fh% S^Е܀䁮mz*Et+[CW}+@8+&$UEt%m ]!Z@WOЂ֬5tpMk+@+ wB2x/BWvʶn{ J@WO䚷i[U{]!T+%5mѮ&{?Վ(Aztҩ% TM&lPn{l/*[8pl>W{4HMu")PB :d7(˛tiMs@M6rIX /8n- Y~{LX{:رZN09,Nޫ?{F!-˽fpVu Zc`B{p4[p5nUG%t,9m+Dk >}3y7BVttʖ-+"BƴQb QڃCWjͦW] Sj#>fp#fh} 3cPm@W@W6=ũ/"-thK 9Ma卦;J#k)edvRLS0DDVA4|t㗿~qWݰ n &(5/Cnꆝ|<]Nb%7m3ApM\/rgkg>qcL36J7Eu֥bZb_fLZjˠ%qUi ;⢥n n]-r(B"B>zoQ- p?a; a?\5_HyGc.nш7G9t#1z7ׅDo) pԉqt~]xggI69&L6qΤɉy%2@p9Wf.5$@;M igCUj< F1}\Nh Ҭd['9p%:"9QZ#[i7k˼I^SBEaI,D2IT.lڎʦq(N4)A,9FAX4yL<՞Jb2; k(#MwVR eoM)B+q4B$ʠLye,wVz=Shj M9/FR5RRW:KB)",@{ФdDKdkOhqwt$tfh̘ ͠cf>NPD sD^;$0 +^.Pd54B}ʮ6d\q&A:szau|!ACCŜfVPf%gĤa V[RDZ'@#+$s7}擡 zʂP@փ{`IT{aJ>a'Z]Rʮϛ(@5I> Ţ .e-%# h&[CR@rR?]ko\Gr+}"{~T䃁 6aTDʏ{jvd8o{2 q8sS܎=:>?%knus=i:wK9jci g16CFUyr ՐR@j%!BRXoݸT`1yZm$B ɨ|W!'c#ZZal%EtAe dGh]udV@N`Nڨ/ ~ E*UFG>`)FgfR|WZ}PR65 "(e*9<Ī7nJ|Y^,X]A8&_>[zT6`Ņ%4<2=K+5Rl{ XU\@R:nzdF) C *TD@5w˽AAQl#ZuA)ch'FWkVَp5TM`ܝ&EȤfC2#l};IcYbԨTߕziZ26HQc9MF2 VqA Av%4/r3ki@e ¶7ߺ Da BHP%D&T+Di?hx"W X)[K1 ,,xu0vTL@TCJB*)ԙH͇*Qp)y 6lC@WJo7XQ9S )βs. )#-K*%d@BY_TPcݶ8RQ>( E̾Q,F^$L'HH/>XԠ^8(9& Y,kOM2+V*{M/dPgF\0oxrC+ĥj*٠db̠JUYa8iB0`s#v> CcⅥ^o\svZA [ŨΎ#D m3bka$=Cw /%4dYm jmlAhqU +0똆'z$;u,J肸PV24ITiyM(SKÙ.K}t\%f0A 9ɐQgk#ԭHw 3xh GUSYߨFbΩN&dPJi`M0=Í?wyv]8rVL)V  XF;KrIn=h /Z@mDBMuj WH%tep7P@R@"4eP0F5Kq[S-Ah[Ry !zbA jhjwXb;ft 0UF/Q?hН%O@Gm6Ls$Ix2$?< B#jw7ygjތ~XT&UXPY|,5⬐ )ضYj-Ѫ4V Q[YdP55Ԁʬfz86RjvT*"ر,e"俣)40\G÷h5g jxaȵ6vٗ`z3@v?p$;7 PwM< pff=+[{ XQMΪ4^]fZkIQ3󬑲FhhQAiw(g-7\}ʌ4IMGAA'? 
FGtɑMQd*MKQ~hynthd3 rFn&KfPRȦIie Bp6QZjZUٔ$"ɷ4B1͇*Y @jfG;)5 ئ{YwKIK{py3SsKЩҶOMfab#P/~א`-djo0|JIo !cbAn$Agl]!׽`Jwq M0pnT =<^ wtyp,-lǝ =c|9odz'9v4B(ݔj''" °NjqŒmʼnL-G\tB!I ]pvAt-Fg1%(\PO-@0m] N>%iMQ"C380HY2![{h09`Oy6>>>n\Xu?I?Y8+ |*48](ŕ3Pә.RJӱK p)q5KL/:X\eI+ ScbT_'FgyQd*…j#clt˾ՙAF l1H_|O`ɳ!0C0GVpϘzI1x=Ɔ-̠IbjZ:Ib5&JjD16nU)Mp 54'EMٵz!M6\s^E7 Ӕܗcܰk Ύ[=*M!XEZ~ꊰ&/<:OHSQ{l B0]F]jG5ƫҖR@9Ce) _4֡cDݴ #(|gBљ\ Qvr 2|Wu')gC@J*rywD(Vc~3 V^`D9i)BK>!mu ZPDbM< E{Jxhe Gt X=.2Sd柡#}6`:2Q锨w  c' }/r/T*5^^&KE>1(v9)F~р9I@Q`+yN1uZ16m:[cMkh*r ,=hݶ9LI"X kM+N nF /H.m G!oۺD:S pbǻ12Pklv5 75+ ٨ctwFpsyqRV%}G7<\2yhMǟ.mJݶ@m\PhZL&_& @DL{%9~F8a\"Wϰшj?8|gx6_,ՔtQ)tkvuƂaɗ_IAI$oRo$=ivXx<* &XC",DId0H 0IWeev1|&mi ;l8IGaIza"Y%tD ʠƸMk(+*3RTٸ}fM6{*F"}VN:_|1,|b0i/ҝIqAv@\qZJQ;D+hIKHJ LwD@c ߾p<{>3BzrRTsږQлܢ܀Rҫ/؆(!)LlJ(lz` [K3 'gTIz[4Qd&N1!BllӤ`03a 4IN֌.-Ax,`(*Jn'e&?C v4ut å26㴠Qhާ$]xǮqXO:n姏 a V(96UoՕtry<.i}`b;qػ㹈R~T5i{mF.1媢2*@!FمqMJm)63f-*gŨȐJHGJ4\M\{ g dWՅ@Zδd:,?C0#( ]p:nE:<w$q8Te<"w0=4F7 #@PKn$bq6R+MRVk_&T3qE>,nYI@٧J52PbςWF@&:Cq˪m_NFsąWYa4:99H n+)h!14ڹ+އ ?jsсDR4/&UNdh92 M 5Va w|1; ”}t}.VO1T=ŸxyĊB bޒZRq:CZǖgP%U:V(&H6ZGQ2DyY&]m~=+0-L93+jT{uq'Lr\̉炀Bry {JUΌa*E:P(j/sX::Y'DՅR\9 5{Aa)h6\8y8jh\@Ro]KNΔ9[gAYe )罡%w\Z`%D"*49r[8S qFkKήu]oăX?oS=! 즚 i2)R'V懫VFWv( Kj[\_Cʔ*y.([n/)ǔFъ[_}h-$(h8k28|l>E~ _UrO[*G-j gfht00q`0Ƃ*6Q!/¼z~={/_MsBy(9@hj@rn P@:XxKx<0G¯Nj#Ný?+ͣ \~۟?vyi[φ`oops͛W/߿V]_^_ׯ5.G_/o~ɯd9ܼz<,'& o{5Y.ppy  epp>25w}eoώ׿dR+l۟~se㐇Rs&NfamlYpv;PAe|`|yZ{aW3x,~/-[-Ё[+ޞf뫮o8>'߀XN7D}#/?[Ű2k{__pa+Wz%opwm_!Z[g}:( ʉFW=$hRG ut*L])yO%>=9ͅl|st/C{,ϧOޢb}ID0>;Xr Fǫ+m1%%>?f õ ^v':bc|T*;kd:x~~2B[j"1/,F]S{i/G~z}Y7/ӫśhB:_4JG p<|j(Y?^>^r~­mǸ&8Y4Y/4R u]?^N{ ?o/J_/Ek&jg1kjqK䏗2x*[p/妓94 m %$oV`O[ɃKw_'{P`<&Tg|BT%d$ׁ8ƈo y7uxy0ݸ Jݠ4 ϥi~Ks}7X8kU7t}fvv:O^;N3=u͓&>2xlp0М&'@Z]p. Q& A%OYwģWkW0!*1d\K<2AS1I4X {iفR iYڪHs[hKe&Q$Klf̑PhqqfBYʂ#X.qB*Q)j(.SKe IV4Bo-qԭVu). R'Ŭ$ *5IfF+ÌJ4YT4u)LfB?w`jRҮ༠N{wO{wQ=]GP$(lRXKKX"zr 1+DHaC9q7m%pe$0sJS#BQkqʉ=UC۩@ϬI -I\흫\-{޹;W{j\]sb{N_{q.Z7b6A]p C%4/\E2>p@\Uc54H7 IZ-40|0pDyIDٍ63҆&VVVCovc.9T;e}n 3 *lm|-CxVAz#]BeiHG=vJ]׃0h<ڛ$:&OE)0\T?7\$Z$A"lD"\)҄ gHp- P0Z(,%Nb0ǤJųǜ( 1P){|i7Pn"Z* imSn^ZxxBkt`uՇxʩ J!+q #D)eH@=N9#fڔo0QS!7`h۠j" H `ut!9HVȨc49c3Ku! 7DNllQ=*[+pi'RJfp- iwVġE\ġEj:Z}Xh|B}Av (&Sp( NʰZJւṚa7, j;^ n-O-Pk1.K&rM$աSIHT!K6؅hHFϣ 5{вQORUK$#r錂gB UV>X+#Ek3QY7YNY!H8s!1| ~*{j&:P]Pin$Ӧ".P%*…ՀZ(!24K%.DL 5M}fSq= mMmR9:'PgkeREKG1$NZF*:Tw|C(P<"dsx׸vnG)㸭/D E q&0@ڨҏ&!] {^"Cd}!X#oE6ފlx+ۃȌ 7;uJ9f)C M{޽Tzд7&7"-P[oz :z !Z[  s߽jk#^2k4PgF" V)¸nQF}f5dٛWh;ת$YBN̚6Wqc8o;e9~2SCִ&%͡&ڀ}Ff ׮sbNr>o3ryB34Y2hLHfˊHsAkAZ>w5W~-M&j;ݙqtg, ldqKϭZkkN5H^FCHyμE"ً\Twv.gJ͜l4NR2u". FP%F:! Sсp"i kFZ U!Ep4iU;mֳ uCkwol[jTR1};9{Ϲ7pbs"CAc5! ۧ8H}V_#͛Wuƻ#Тy}6/IA->Ѽt%jݮRu1SU2T+˔r ӕ.S{mr-=Zg9$Bs i[#z80jJǨF]f0&B~B)["\U"aBp9KVV+PB6Wv1E}e6bd~;^fInعaUR|FN5mt}pcdLInp*)@ꅥ #׵ k + }A)dl2/!+ v$ (D9q0tF}p[{dK"$*-8'(5$jlҢheN޻Ed}-o5bt=[ͼ%qPl|7{J(fܪEg \x=XݻU'¤uNNSppO@P&읓%y=, W(ޗ/rnVFMS&]]Iă=w4wi q]~nLe9:WY,LX#=s,+fSS!Y) ZJA 80q\Y3),H61zKm<}U.U*G_տWU*Mj_*Z@#ɵ`[ ;q8ԆN5QZ*}C$j4a6=r(2ji0{dqѳg32t)dǫ}mzcʼncUa.{PS;^638;0ܖ4b {g\K*"?45ۧ !Vmz1t NqStaCUT҉Tr|n._>N$\MTnU^1Jq&ս87Nw~kz{{}}9 ÍpFWu٠u*Y N)4> De7yoǥJiYiio9&T>{{Һ1t_00xy|z 6{sA"C74ܭ|8t;*`)YiF'`nIzOʒjW+S!OH3.K5SR qh#^QӥDpLGM)HFC6%xDf@I(F@Xf|ioDB!`+7F>&v)&pA}L 1>&XB(+1v2p)L޻_V|ހ."6 h2R?`\ |įw .Іw=9@ٶ]4xfם\]}E{)nma~&R2>PEJ׃ ? xx!XYhPA(ˈw߬I"^yf ^ HJ*yT1iP3.#㉀e(&"hy.++8̑v 8!Aʯ?k&vI "y%[!!J@+NBCwv-ERN_ qBc8~6>' ͤSAf^]J25' wz*'-do89iF)T>WG kǎ_ y*-x 0y)>_V=/R!bhܘ$nx]Sx(YJ152sPe!neL][sTI+y?%%"y_6D^  _e3`s6]vVIRRK>2fĭ9WT\waͬi9_-1f ٔ#EɕX^+XC)9ڷ#$U}==1f /o̶{Xm`rH%A^(45 SILMb%!sQ7;B7H(0怜{5SY.5>. 
?6C*:pyݤcg{K~})?BM\Kf&1x'Nf"t jDkuI|i@y1JF(e o'RoS Gk,Fy?m#ͯ!ZE-RhA c  ch_ޔmvċD.6%{iIJ^]E$¤Q,#UТkbELY,Lԕ%Khdz~tzU)u(w}-ϙ' %O,b$ߵ;uѱtnvMUںqJUl{䐒أwE,pI; "bR>{񱹠LL͌{ Jh\Hff[TՕZxb h1ն^]'MB3Z;Dڟz^p{*}1ȼZ;~m))PDƒľ1 $aftPcKuʾٻs𑛤ȹz,hd 1,8.XB) +Ϣ'CG ]ꚫm["703&bIlj81J_ƋjFvҽp6b ~?-5- [+Ey{ϕV Wċ,(DkӸ6^|P̚8:ڑ+ebzrK&lDF\<@\dKbD nϴ ;3חP(tk òp@'_ {P9cPbfԞq>ېJك%/t~,.cU7lўzgq s;.}ΚS F]xĈzg`1lK>ic'2g.,]m.wv8x;`Cr[i[*Qy\do/> j)׆!0 0S^|DXbΠ9G-m- jAjX>hwnQ#0 ȷRGo"yT*}rpC0ێ?C.ON<]A.sGȜtxk`i0i6հޙEm[DmsvXK"2|. wP)s#DGY ?3Kq()c0ǡYhZP>wnWSP"8wTij!;_Ha \CnXrKPĭAScp5*;zTij\q\n=G)k ` 2k-% Y"3t*k)9.ZԳ]Bdr2w{j,1TwxB;I1'Eœ QFMM܋DfzbiocD0ҭË)w\5W9}|_?.QI. Q %̿R7VEBˢΌS䄪T}+qzJ5GTjQ_Rm4:; Ge68٩"_?NPȟ/i[Nu>~&mk!z#b~Y^/Ou(Mb^85~ 4ƍ4N'k*`s{,]Jf)vӥNc1N2IylA+]'۝J2qvlo:рU$ VC$pd;LJ͂> my(#u`i v6ƯTUk7O'y_vc}Oo}?ykM;|+Os6O3eM w7N9ۤznk~!:Ʊ_v_Pa~8^!o$\̱̜Kd!;Ny–4bqNE,qW4?n3B:,}by` 2;otKDr\B8*aqp(â8j̷{1-~ݬ^Oq򓘂o/̵?6̋ϽF6R OΔ`Q\p^eOI=F[0U^sR &uc1t=K<*=N d-J>]+OZNlC+P)>gdg?/Y @%z*WB6gLt{4GJNQtU@$t(#P3]: Eՠ|P=w"`8h sd6'ε/T\y*ʹ;/OLe p8&/\!נO0J: ֣A"1*BTFdFmgvM}r9LJOd\vqK`82ӌ6@8F.7>_̶6;dk`3&܄c"BS3cCHX4r6:S޹E1a4,< nLG [v r\$( mlvO_y<q:F},swY\’25W02}x}.C iF?+P1ÚܺkR8iiҞK &2QI^Լl%1 [`j ,g$nG2LBAJ՘{nlvLᐳXAL?5A`ʹg_l,?m t&|u/w(qnqԙQ}.Qv]&[`EĨ>7J>/Fqi*:e@'sqSr\cƀ"5#[tvb;_j_[Hlr qgF\PW̏DZ(f;q4u FAY{M8;$,;WH@r[$0J1*/଩ uH ͏D#B4aH׀RGs4fK\ή=܃J AsE1|<"aܷg6)}A>4Sa$Cqc6q $\97zzgGT8oXI^ƅ_09>O }̱N充`TTS{vW!ڜdxJ_^NG?gWucazCͫ՛wo?_]//>۰kg3*݋UI۫oķ}A][|Mm,"ù$ 6A(\aI.mȒ"N]#RDɒM/+6@-Q$wv}yvy(9- Kbiޥsre:{l?HeA_BA0z wjf`г!j&Is湰jH̑9SIj]pQ;E q+s%o]{2 wA0m5iP+Q宔rV 'ܲ|Jpjg/_nNJٗ$x 2K0?'ZOEB &$}.0xgңM*`~õs4r,-E:~dƝh2AoݓB<NiBH3Cwx~ɃbRDbNmjZf(\YNo2%^`IRH^_K)9lr̬TikC}m!)WDon"t kF#jPVqrf qVzÏ1 ^qΡkMDB8%F t=:WHx11fιܶrO54Oܕx4:v kn |YZakpm8gve68 EkJ@nڮvNQEjoJZDe$ z~n[TtժocկqUi3c?)UAT_.9MZ~}3 g(s6unoKdzׯ<7At3q×xqHCVƱFra)1g"ՔY_'烑o| *{jM6IӎdMQ|gYi6IV8z;쑽rVzed %Y#D3;ʚXTzg%/'1wu%lP}$Tڿ\~zHʟrr\3QRF&[iK ){tto2s3zT7b+ChG#!1^@ FcbוƊSTN+"\Lz?zi2J\ |bAQ:= ᑧ1JwSq94&Ѿ C(s7!asik"|s*n^758y'Ӓm-e0R g4Iw%nDM3b(pџ,9FlGB.^ Ivxj0H1ѵ.SDhFdBOI FCe07 TݣbѾq;dL{Fcsh{CCt1u&8DKE0^RT IPeEwj#_k-ef\J}8 iO1!C>FJrH)v1*p՞˱zv炑AjrD i#OFi7V !X)[.;;IUA]nC5-vϮPzE!εks" jm.WU&NDy?EϞTNxuzɣx.ݷVgZ'#ZAs!I^1\4 &,T%Ym<+DPRIpd*|' u!V]fXUi!G!Bg [+4]k]kB[ՔYe|abAUZdR6Ad5F\LH x+wZ]X0+et^$RQiL,mlĈ|N/`s7, |33'IC-}Ys-1Gw\˴̜ͅuŃ2Y U-eR姥:Tӿ. 
ɯkZ ^ (x f̒1!"sXXӂf:#`=aT 󅄯YGZÕW]&X֢@$1w~<<4Lz>ڮ {󠳥2SHjβ),"BIK=-|ߓZ#p=.͍WX1TuNHR)MCFqO4 )&pNm`m2u 9M,1R![{>6>:{+o$fpq('WLk-L\(l)1xh`ظMfߵ˃yIw|j<Q{d{X!I'5%HWPi9,Vd90 :EdhCB_l887n-cҷ)FnR:BkL`?य़5+  ($^qhfjf򭫄%D \A塰Z0F+n}]w[bQ 4&·ECA(-}9"0c]qZwH[W:VvM>#"ejp@IJQdx˿ ,BK34s ,& o& .:ڂ5yx3=Itx]5:_=jPgܝ1w;c=cd`S9!DP*C,;%2!c!Da,F=*Zu~sbC{W 4()h\hA·$W< |رiip'I$6S8,\DgF׫S޸!]| y]¨X_\'/;/;~$ e2fШȀЀ4!< fLG̺fʐEQ-P',ZZ`9[1;FcM7bN޼z޼e{N<GOo?rt/Ta tnQuP!TWT[(WG{L1`+kx&yt'J&[7[j`Tp@Hd4DcS|0a bT63zsX=C ѵij&a9'FswN{9Ӟ"--B +t8 s>"'R0^Ciˌ`r)r?E *4-͗"3doZ#ofSOw>uOmȚ s#&i#@ @F7J+I(&\+q`7 CTQ`CRO>R)'ByTj@~xR.x4p0< hj.?#Cx™, Z턀䈖>kAPpVC!M2Ѫ2[t8ۖmS~~WQZ͑'hb̮̽F( Ԙܸ y4X4#7a)8Tm#%%ɺwq/ ũD:XXb`8D#+u%10LOe]% g:.u(^._?sF0;M ; c_LF:Ccؙ/({71(儥8F&\Mlgtkt$ &?SR kŧnfIqgxvL.j oy(9-d9l^䡕Kpٯɜv0o!hw|}e5H !.<gCD% \B]"錍7υ}oiNsdu@ł`ߊūc*Ŕ<5 j0ҜYJ~t#s౓iw2fuNvN;/ B; {0S@_igݯW&Ga0+q3LFS.$6 &ѻ{6@D2/ 㼪p8EMΧ?>6|^Ɠѻ8pqQwѫ5\j&?i͛}g~5+8?@3 Q^_ݓ_}~zZ:N{wۯW^bo8Y&:-0|߻b`KW߻[WmI=wCmsЇrLv8Ef0G--Io=p姗gfMFcsRi]緼s E3uܓdC~)TP S桓$JZe[>Wq{ ka=WZ?!"zzt0ΣYrO!;!-ҫxқ8M0}h'0-' >Iy0eFq7_ cDS*ऊt+4y]}q@H۳0tKuۗ6&5`ynÙ^W;y] Ϧ&~ Øӏk7ճ`mQi`Km/z=]ӯ E%[< GߦY1r 7{x|6M׾}HyO?``HtqfT0FokݼΖY+ګqmLސә$5Hf i0>Q/ϖ^/R7No~u[p/`k<`"›p8翏8{3zMoo>mY}y72o?7K}Ы:Pb4oZI d#[ZzlΙ{U8AϾ&YOd0 gn`h=plEݑzeH'}HH_}6cO<%"d&xOA.AJou`M-11a9:lA H̢: )1 # *%2j*x\ŝhïs.Z~_<+g"YyxO\&+Bjd3yZʔ_])>ۃ[*_Ss7{͎Tjiʵo/a*%^ A6~6M}w7W/O8z`f$goo:}GN@1Ő ۣ7/JGTcАAuW]]YT& WrM]%낞Uy9I |yMWAz|][x=Tt]/K6>Fpȉ|MwKyk[_voInDUL81 7)eT9Hv|<|,kt&$)zI;24BBB#`I{όh 2%7t21:סMBhsG"!krFQyyte$qy-X2.\AV"h'n?^㔏>v &*+0^H|pޒ=z55_\<_P@~>;?/ަyyN^CݜpY݉'G\(8|v:>H=FX&c>:(ZY!x;noCQ̽usn1ObF_OӪQׄi͘l{bJa 3p_'+{BԡY)!&2aaou=Baxq q,szםؓl+f `-Avܣ# cZC9(8Ij{& +e=\Hכ*&mxDZsJHcS;0ȟ/]tPlv: -AwMYkADc 9;i- 3# gRe361XBؖeɔ zoѐi6n0敕 Nu7gn1{J>9jl Jr;ĮР➼jvֲ<}2c{6 \0?]U[ 5<P'P9@jP5z-Y{-rYG2e RfIމL0D/mHN$|VA(煮ù>X9"DZf%@l,06$Yq54M+4¡׺VMqepRJlЯon^;C7l\C0"jG6n][]^#Z.`_~Nǭ5j9XocK.9T Y6 Xջk^\SMNMa}-=y0mcm>u^y|u0?A ۪a4F}S!foO6is8QR7ca,C!8ʺ)BNb,xm,&JیH? ױO7)o6_)n}keiٷvܷ6Y˱RX* B$BMQ䐄F9QX!hrل g\] B!j:A4GpYİ&V%ƤId I8p}ΣڻX8\`^\tQ L>bDi˕Y(KXaPtL1U8niMC譡RӊݕUZR%-8d^Vp@(,)O,7R?BÌ7x bE4IH#'e&N2`& *ΈBPLsnB!V?zԚL ].dKu]!>%Q2Bf} 0?bw򝺞A+V3dWR{ t! UϠ9g8{(]D))2Ⱥq zHIjw/2!4Q -DJH&h%H}q3)8p,RE)Ŭt [t6vha­md{iQJShr)qr-y@Ɍ)~>jS`> IY:A.HWn&EvQ+@!hTMv:CAv(H8'J}"q#H̹d%(I]sHg.wQ*6x3Q@=GHTcQ/} ^J҈ч% XG YkFQy9Z2ˆ/1kXĒt?8[j D@4j`ۦG:,>3!dTig#}4pY s]] Z&Zϲ+=EIji !d`u]+˙V )7cdD!»P9aDD @S>мRn(gP0JU{PhŸ5{Wv4-Fcim9 ś#8Ww^!u5ڗ4 +ܾ~pȹ(םn X%Z6 n8iE NK˟/֟W  _hoRis+f($`2ݿ.f7phqN,l!'m8kvRgM݆gQ!:Ʋ͚luYSq>kn>Yc-?@cull|_Re>Wn>" V=r|J4+A2X}q7;B֐ ~H)Ys Ι!Ѽ"=9#jh߽c=Y;TzZ4J#j{ f=Ql}'A8fst5z;{z2QnSvoY5mq<%%ȁYڛS|{s  wp(>(8|pR4(8B^A/Mn;54%ڟS;3ד'WX#Mk`Q[ \ G|t5 W2f]+@Mj9sw&4pom {G>Mƛ.ʗ]xvQiqűHꯈi!"2G Jd6.T%Xl vdʮxm`*Lƍg1s }h6SʩՉji2Qw{$/ p풪["N2xBfEҘ\g#>\nk䘖=ds7Ȍ@h qS16@3MqA! 
j6NNoVmxeMBͻL pD%#Ol%yT39RzVG 2 /Qj\1cv0t3^-NL DZK|B–KӞKϐ1[$ŗ;E= ]̵VCn~w7VNʫy Ƿo>gl3"y]Z^1iöv.NXͅs[FV}q\ ~47w/o܆ 77*JFٛ[卹Y4wؑ?dbp=#Qh.'oTuwW ppY$15B;bK;hkglaV`< T@JgjPvP ; ѨDV&J\Ai@auНn#q" ;m43Mp', ],Vz昛l.ow.pa%\~2VI5.srC7g.jZ>}DVsQ"Bx` nėQ=<&Fґ +UJpݳRIB-6.;JHE&P;.aBKIjoyq+_epJXM2΄KQ*`3X)  .2sfZV x6t1c\FY;=6rh!ZRD#4 sD/o)_U{=!BƤThw8s-FG4d|N)0΂0#%7(ɛDkǽ0lv䍴>TR.uehLj&2q+rH[t[fLopvl3V}>ohGx!^=?C!ib=Z Iϱyq/6>/F.s#S@=2typM GpSzQ'V- &2Tj$7*lSfd\oNca_0e$JL%ei#cl?x2#,y"> l;WZTz~hSG )x{4$X0ֽ@.9..E8S(QOB驣yC)|#^S=>wp&HdS &3.,|ʬ ݒezُWK[p_/a Y{QDpvKSNltۏ x ^AǴ)D'i{y„c)Q)ʉSH 9//`% +;E&Kp$}9o/Ye E rYa:GbﳹҸo짞@)aVofnBSݫ b 2 F%%4ȥ!F)S;YW n_s"PXEQ FS+o]RjE{,FGeJ: Y芗eU!U WZdĨB$cRJřQvL)tV hf$(c5R%L5X-TB}?WV&!<Ōk˿0J}Npd6ϝw~u)r.]%Wάm<=f6p)>$R6/sO[VRrN:6-.bgRky Phҵov3\>r6*PDz-߾"|crz%ґg#4*aA .[VU^.qlHpKO'sH D:Vd*X"}ЛI4*7v #~;Gk#Pms3Bi3HOr(ܜs#C"@P^!8N$p9_E⌘J,%XOŒkn Z~ X/=Cβ$<'fIs-=þ{칃Nމ&D+;MbBgB!(D㤉Bn xG}I^#!>n  tnBʂ>N0F荠'{X(Ԑō2@w놨DoFR=p Bn.Ld]8ƙs2gޙS`*Nsx0]?~9n\sfes`W "/ S16\UJ%*߁"È f_>??'=_5n5I'dbN_/7'M_>No9Ռ(ӱ(6:Ŧ33j(AU^B0̙,3)yNd4-l(Ȅ0+j 4D(UrfyaS]T!Sh)jBxmu[c5`, H4]#ߪOzi+Y8~N'e wȰQ~x|[V&tǟ_?x`Q#?Ս]9{ arJؿ&?X狇t}m7tZr^iuxmwV} K6Cn5 oOnR-P0oK/}㎗ڹ(AL}[ iGZgv߭a&pyuWB8ҙѻqkUe ZWei=8~@[.զT鼪x1t6mk‰.Q);ۿ4"g !؟qk~}v3%q]b)͉5QJWSyXwK (a2`IJ-hJG].ciڠH)h[FpDSmYOHL7}MnLn|65KbU mí/6Er1!Ri Bv/3*a+Jg#xr_`O'qu @T1)jm#W\AjP#\)A#WZ6 2RxXc@qfSo}I^@9 Wky߾DaqD71cK7cFۯCL@ ]3ow}b>zIJuZ}9.F98=)mKЮ.zH&U&%ˠ*ͱ-Eo3tВ00mGܓfo"losl1h䮋n7läXN$rȮxa" i0@LM\! ICwW\hz#hV)**,UA ,-ʹ{x kgոL0aGH13y˼z$~ < >FS _cqz^p H؛㣀I)0 C )+W}#QH)dlV8hH|f%FA Avg!79 K+OoB`6w#ɐμ Q~Jݿ";>v`c}/랙?OA/+`lI@Uڬ3?%?ީ8m)pc GS~(m ,7#49B]wo-ҫ+XUy=VWu_Yޓy!p*v'TqE1 P]0SqIl\llA !GFxG,Nn1@ dLs@tuN>$@#%v}@"kEaCB^p$Au6Xߣ޴Gʑ&7!+%È=_sKw/FEE),M,׆#u }OfDLMFY%r!gRFEjTDzKJԽ\NE=NMt?E '(zķo>gl37fXBVޜ<̨ "UD^l@xR -2CtU2#Yg*:)Mae ϶XLȗ8ZN(2-qtݢ)|>xCPF?1($kDhFK+ҩ&Z_lM%FF]$ k ;xrRZG5ڰuD<\MV )EJ ؒckG<0(|)룕o͔5@^y'nH Pm'ۛ(^iK @DݠcHXv,s)$f8vSi7MַKJ Rv+5.}ۯ;b4\#E#`D=AT l#CpVſE>-DU}ZxSU. cxkEqPII"*M"WK6/1KUWaL3&n꧞@"і<ދI4Upo|r+nHzD MgZƑbb?G,ȧpw͗YI%8gYEuV?M5%uw ~U,ziue{ZX8/EKeY;*PF%k@ےy9窦9Tm095wC%jp-Fϭhb]M AZ;0 (q*4dNXJC"+*5Bu_1zNQO1rU V 2Zr h,Y00J˥Mϣ "*y$n&Vau$]p%Bs2ьT!F1p1QL0Z Z -FSp\RVe2:d&:&/hg@l J>dPޓI0?(0%smյt.4ZW jju/;qTlhj0lǬzV^~KV_1lci{HK=jbLArb3iLé5fdU㩛D9T5MO6zIBCВ\i%LC;ɵЍRk?3ڃ--Lg}Oq T? N Z ӥe0_k\ǼcNhp{7yܠC]NEs9F4jP_njsDLDDv]akVesÜ5xx\.9=zx¨.ODGjCD%9t u{Y?x`Tf֪]o{-̓ brР'D0C1vs7a7Tv@Ð՜w+pɕ#,-:r@tA\YA*]"$m u+VlG,}$vH␫S͎MHFCMD$tD>"XW74_:pIІ1XFz |6 3V,.De pKLpnרJ#Q֥)QY]1_.D0 J*tUuUfW Ջ )&ZZ"hx H(AhjƘCT\V T,|е`Ȇ Af;,NIdZ;m`oɼt;(hX8>f6\/3Nނ+`'j%4sqYQU˭˳4 sdzaJu\i F/!֤_-3rE0!T6d\mJ22CJ%*+ li ϸa-g;ˀ3!igmikSi+ ]31c.=TʗdH*Ę2#YQY`'Nl'aPV+U!ZW{Z U0Vu|oLJeb&^W}Z|a';ⲕ鍏% z+'ut?{p+l G?}/{kj](%O^ Ͱņ1Nin(ؙ=-ju&kGV]9O9\a_>;&-6(wĔRΐmӏtSņ1!OyG dz ؚlwMnM6{z$mc8o pjk&*Qrs}Z$Kt\i'Gs&=*{({i-~$Xm}SJxEK/51Ȇ` 3+4q(icBzWJ h\ H8%  #XSGP4 7O%xˡ5nw9Z0ǒvL}ny%[YFϿ̦man~;s7&L;?$Ŕ_1֬.JϝML<7dE"#2Ӟ*E[>ܷD99ГrҩVxJ'\Ndb`Rz ҥfgp^Q/,G%,{W?wU.)KW]=)ѼD y*Sʬ{ILc/[yZlBk5g\iWY_qagzzz+^e .^AjtBWBT2tBsMY!9rGePue 2X'!u;lsjsuA&8'?mʨh?Z'/s.턠L=}ߕvD+SlhwZ1hroz  )ry5:Vs_f8PQ|䈞@{1S(Xc +ɴBn?C ! 
!{At,r=Fh2ԈzJ\yLj@F4Ҕ=kއDtb*Ҡ4‐a ԭܑQvth=% b?u$joKfYbG9yԳq|Sw훋 .̄_ɥqݞ->?}A_,g0jcn9 VgxgaV?6=.o{@6<&ԡdF)M%8/gCl%sb$s>hdx)Ⅷޒ* ۽?ϙc}tv!仐>]=yq9%3p0~K:zR怶%/qot#2Bsץ`^Xb!kEVƑ$ w0E Wit]jTܺ{CVN=# oK->&w&Χz =mZM<>}AmѢ-`FݛL5|V4LD4H( UYJE2[+GjC5VPP/D.mha̞3X,+Ob]Vx_:.J1No H#?)SiSɁxZjǒa7R?9;'ٍ$; Yv'9 /wr$]D8s|i'} GF6-$$ )E8 .42N4h8.1+eYim-8-5+ao#΀WdcCU gP͙C%I 7"z^y$mWIzxSJ<1rNBxcvcfO.dT$W7ĭZ7"k'o?烞]}|c\2:we'DYr\6Nęs;w~Ұ >_:,ns .cн ~c_cM,Qwm=nz rYf@6";~I@m"r%;栗䮆7=3 w9.oB`F sSZ: KiWqױ+v9CeBo2^r%j* @Jg>/nAcOiwo`p /dS,ʠ`*UFD[)jT*Jj۫gHGA3r* Dۉ|Ρ\}#GLCYELw偫@ӕ 4@/ Bl N&m Y)Eǐ7L51vzDPʉ9?Xf 0D=NQfzR!0)F($$$Ӧp7ӻH 6 Bqx\"0 `m h6sB=WKT-]w*"3F T rd|?G feXFtu0Ijuظh Q^)e"\A k|-*]ӂ$Fm0$h[{ 64%D}1@c__ V?Bi~gk=kpp"zt֋¨ j$Rժ7_ʈ\kNe+3M"ZPuݾRw;&FRwוEDsBrMBfVjœK^{ZȭUMzB㳅$zk-B WSzqDCݯv  @tTDS!JC%YINNiK_6"eg<__]%wS҆Kff;fO]p'\m|0qiո$!cAws_JBNj <ްk e :-.z+>,N(W@EqX @ڃ(ꨅAmiе6W8N=LZY$<9-IR o2'F4c5bX|jҔexOfjٸ`qHHWFU>yAb Vլ:[NTI\!+˯XJx-2*. UP5!^YD&WښUcbnb2geeTAJR3`,M} -t}O^ԫWW1ӛ/YX v; SB^I{،?;}pF尡6d8a]qlƱ\8*kJӔQ!ِȇEA_`h< BK2FfL;DrC& Sb)w¡P56P5[wo%'Q#IK*qM-fu+.nZ@)3e)\L :X+,SМ}{iП1dzpR9OviÎ}ӻ-irڎ1z,Ѓ4z1{S`kˁ2i:b'Pct;ן}zζ\Kh գz p#K{g-3ieB B+Cv6 r\\tZZHaq}65g-YݡcS8fuڎ)MFsgEk9CٖW@w(P+Hr7$g5$5nyǕB=y7:}>F{p-)BGQT@}U*T;ضqp63Fߝzc#ElyDQy܃,0ƌL1!XBf h% zkSŵ)B9ΝMvhJ{ssUN@r`ِl6#OR%SõZiBI]޾O &gUӟ*F_֭2Lѣ*HuT1)P/;/P !-cд٢d͉Sd{;kMrXoa3HҍZ**O?wVs#UN4yT&IAH!4ZbhaHMxe*?%vYrŹ}KeB$ rpc%R=#jvp=!R#wѾ ,c^'m6wTLTi4o+YYJ u`RKZH&+a XGgLwhu9l@uX$YX`;y(Z]=\aЍЋyєЙd Q;l;z%@״vSh-HoA\5= /Ǜ\݊,2ۇY[ } 6Βϙo77S2wW?],>,^ߗ,P/ʤm8B\dW] LnpO9w٫-_%,1[P ;E{Ӄi|NȟҩSm8QB~-7qg`we%޷jcot)9-ZVHVZTFxJО/,cWΤ}F.Ffyڷ+bhup,_nC!:/}u#TgČQϫݾw )jh:ކ1dF1Ŏ/}0YLjJ*cH+Tg=/kQڈуLو.\nQeZΔlDOnoQC"z !Ӄoۧ1<`@ s3*`y2g_W3B1䈚`,at./{e?Q%![s:NIE4ZqPOHȏS4$n>ͧX0I_|b'qct^ʡCA*-ԕ'~vC)NĉWI<ſzb"DCj3TWdV,e e؇SWDg!/81@Ma@|S[q~b&Ns-R{MrT ?\ǻ@/DdlR^On]AV[HxՋp%TIxZ)>!ֿw+Z0whܪwܸc3Je 3]ֿ%~/?]“ ѼQK1l-T;xj7c%u"Ɇڇ|恷ٛ!dmxPi<{z+Tr^J]Õ )E٘6R0xD01>`2*#lSl5Z_pRp}~+ %m\4J_-nIx$'̿]ƖOLRU4#..}|r~kl'($$$Ӧ,߀eAZ"7I*"QuVZd66x9{; ŃCʟ#}-\2C-ITR*OHpJzGAZnap[VN &QnKX)k4NA4ooٝMwǻ>|zCL7y}|Kvs Of_=~.RKs?7?~#LVVOt߼@xw?[mII ô=~'x‰"!W~eϼ\I#T %v<a[ޗ`UJ) s//}d֗(F֓sEB)N ÉsZRc XSpkɰk7TWyh#Z{ U o5#lVS38RF=Tpiے93_KO]L06_o\PfgѮ-iLޥԕ' ׃I5`7Yod_On3P[zQ)cCx`a b`֤ (0:6$`rMe [=!fJ0^Vh@eޫ 1!gu(ԪklH3F~y'lsBi1*"r3<f Z*69jFc_Cyϱg&(3tMC{SȋMD :ޅpNpNpNpNٌ6gz㸕_bG&"!l-@H-RL/#M֨I~UUӥe%2pVTh8] )45=A_%ԁzSߛ #n<'O-B7YS VBe eYBIbY52W$+ )S#}wzjNѣOwoOǖi~¾ HH[<XhU3D.X0B (e)Ȭ;X/$3@xֱ;|әY;a~kh)JRFe?yK=L "huy%Mֺ(?lI1s>ѣk)Qc^cB$=3w0w-s;Y*l!SGiyP/mܪ2Mu-Zs8sƽmds-D{M tG_>4A q(Co{?wF hx79x¼VY%~}Hܴ%U,7=0 y)$R4nO?e9Z{P,U̮b#zO,5,N*,H9B~yԲŒRxoj40`btG=q9J=t~H$DE#bm_-,ҥ/|OM\<f0y-eM<81r<#9L_Nr!Q, V,U wN yss&F `W0좬Y4eP՚$28բCKS$@5gT5FW9k&]m-$Am""#r1>.?4E}~&VAvCHj,v9B {G92(EX y(ŇS oYc٣4oaWY+! &N*G^VkN 䈿.m}5q\B#5RƘ0ˡy=K/̓(=]}HtUh%f. b0=1m +*9QB&G;Hً@zg#xXVMo^ (&ƣڃ0Z@=u_ZRrP+nVut|>^$M\&c%fa%tU狛Q9;yYSYLIU ˜]pAaEÿ-M2Xοidr1_2VzxDZ0p2R!pk9 Ez}p֣H6Qh([t1L-&-htGPy 5\1>}b:K[9tV_)GFтotm,)'DCE(lnk*XIN9qZS!mR fݸZU-m%W#= ?Ħ)MA YFm%B<M~Ƙn4Pd"&JxwyH39zFV}n{[[nX,91HR ' )T}VB#$J+W/Tp3ԁ7CӠt\gHEY3U܍J_U`%;gh[J6x AH*Sʒ, )+hh8W * } )7ZRJ,-4DJ5jV .5uIW3/km H& ! 
+LS+%ǁR ^;Ps2jQ%ᬠ$KW0G,pVuYq%c%Ѭ䊌i a f$,IS5iZ:~!wrBs/\I\@miyVƊ<+tP~CiAA&m ''X5:!WiBL5䝴R FljkjЮtN]S*Ŕ[ N"k#0u8^ n)𳤅Zz 53Y) 碌8cKLjԛ$I}'CWb͠FLU @XN#UMYUR\zס&kw2:Lt]kG68VL*'+BFڀ]~hJNJ$1sJR+US5']RU\P+h6@={!ޗU VnJ)4mc[u+52I~1kSTh""!Mr&GDZ۸,-v1׉Й`J8!uQImb뒊Ѕ V^{VmMCְ$ht[P7,9ІCDeVWbZYgm0inuV>c6}0 #ImBizdfJ0np p_e83h 3jAf!`g!nȕ&1]A$e,9]/\lS 2G[-F5ï[q&89 %zpAG 1a]gd:| P]|ge;T*B aQF7x;_ӌ |GΙQ=k$cI-0_RF$k6O{&6Vʫ4W/CΠEq2˕_eŸcy(In ?GL50dk%aIM]׃ #u^u_JGj_o\"ɷ-i)4N~i೓_.>}8Z}ڵ;Y}o'z,BB9r/ɺxd(@iϵk>SYNݪZ䊜 ?mEs+Kq>#ÓQd+ָl>ت jdt%_vx_{XqDzr~/wOъERĶљ<1ggh[ާdw?O2Q-VCۃɇۢr'4y\=ےBofGLmq7I9\[RF/%v= m|Ҹ*!ͤ%w%>O/}W߾LMu)d9D9~1FߒO2Ɵw>#S320='߄>iֽ) Ђ#"J39 دZXJ)*3: ؿ,me~X8~ɰB/?Ċp42(/ڏi,qZ$39Qh×(%Ző#R\d=n(GO5l1 QҪ+;Rn+<%芋_d,p-TE<؞/pP'a{c 0h ú(7]N/?> CxYڪ:4_"=%=c|<-O|5jD);#Jsdg2g"w]>̒Y휘5Nf!{coL`ˇ`@ؙC0 `rK0=̉ %d<'ّGimƋdoOGWONi6>ώG`]bvOc|ZAetF-CHԊu:b5蜒ZfcԣFQ{pJgxРK*;_drd t.&8'4( '@aE$ 5ڹ/pY+R$2%=I #p)Rk5*ca+ "DZᇊ[jȑIy[]EWOwy=:H5ybYn.DTɤ#QksOC/wH .<0츘%XDNxnTy]"yu)FcĪ8凗dJӍX%(7d?mڀKcys{}ypėc?h#,Ɵ.7ժѹ7lkw修})jwWz%|'d 8~V;__~:Ɠd޿vwˏlFfnԈOѱ~}MMS-oZ*̊{Vw"JeW΢I<%etS9HT BX'u65JI>tKJY4Z%j4Qg >8ˠ2:!N}稝|~9Ӗ)ʝF➑ÅYv=6}dѓP$Zz50xn$ڜ 40HS;&`ΡN+]efO~_I!ww. LKxd Ld֒VQދ"#NHi Ydh)$\M%S X2ayHD"沁XOIn[9]j2g~VJ]Ǣ8>}郵M͓ysuo:yTQUYN@c>(l*hvT鑪}U|u*@4BbΛVᐷ- 2Zj>e,|8$P\G1VG GFkU+Wgd{ eI&|M3,`O%0m$G&ZyD`cq0hsڍ U7G[aJ6l-vaS ]%7pj)W+, UϐA9{Aaq({B]G d5leQq9K%ʹ `5AVI(e,XSu.ezh2@0MrzVQxT'W77` =c6.zy#ݼQ{?Rafpbg gL|t˂Jl1'nfcԼ"`F ]0t0cmOvJ YvrsJ!|"Hrrhh²VaqTdQL5d%dš!(=jk-/%h+|ۛ<zsyC6թ{,wyn)k ;ȕtךŇ9eg9\/U5FW9_Vjac tuvdaoz)-8ȕ҃Ao]s۶W4iecg)uϜdx@L4.(Y,bg<]cCluVJܓV8f"|YIkTN5TKfzQ0.$1jTڣ$J.:lD֡h@B^.S rK)PntC*Kr3Y;yF7PxbVnw9OG)\:xf+5clrW2;LjƬRFUV'ko-7NOՊji6d!|>3*`U1>oD; _2xZoq\#Wax_JU.z!< |Sic<| )P.'Wh.~f~2t!t.CeB!͟PL*@,U(R()LΡd8fN#OɨPWBa>%o؂ES++ u? Û45W+JAcL43Lh4|sB$8?\s#\!R*l@*]@*Wts^AA<< :xRWgS"geB9AUΙ !q[jlnyX!sT>Ll,1W:(Y ,c ͌3vfS;\Ք0H"~KumI5N+;WaIpP6'pep J,PTfꇞ1SJ]IQ-]z U!)7Fi,]Z:Ƚ4Gi@xi`NQo hfA(DAl&yQ3y58'qwoܟӱu-n&f!q8{< J‚]"O׏*7LGA0åk%T ^t.HИ>;«0,_\y~gnph 8Nj4 hx7<56 \~z&)KlB7KNEC BR[p0nxbFjcЦkjE[RDiUznv'˒61$ʭ-= 7={gwhҳA,DH֨YmB"@AM T9J֕%THһKjCBbiHѧJ;7]]:R? Ì+Ȗvm'>uJ7(uN)=ՌKڿ ӘāfMF0;SaTB_5^y~_|+дRFT?QZ\ܥ#+OpY%1?i:;=9rLAc`p*$?yN9b~c _oFfzj _>!Ŏ]~<>z E\ } g^~ܕ,Bq(k![ #upk^/xʎ ^ d %y@F%RqPU\2a֣ڂb8oPvAJZw=4 NI! .ɹf8$ksĒ8[ D{…-ܞn %n}+TzihkZPbzyVn|mH(1 1loUcM gm.ڃ"ΥPy%iȻSMjwfe]?}t?BqyScAhT- >Gwa2z,{Pu8$M3L8cǮ8tǮܢpAUթTWUMpH]ٛ}4sEi IV,-T?wFjweÒ&|Q1yH]䔂oȼ֒SB貱Jr=ţ:OTKɚIT`ʼ:?aFj?5~2*ˡIrޕ;c$m7v)N.E&:*գT=J)tDLٌf$|RRßcB5ЮloPPt~LFI֠P'l | %O9ū IWxY  rLM2CT#Ssᅏo` a\i,$,0^Q# |1RnrMp.:^_}?&SHt) #OOOSوCx$*Nu$ݜ};p~+d 4++z 4yeE?XHZ,敠>٭x1W(ٛ3sS C{~k͝STM~q9G%z8{C$HG* qETsYW#&3 -crף(76[ LҶ,vXq~ydE"+Q XLZ G-nLBdw!wo!w;d4rWQ@W=N@Wwk:4Cq_mI# #tñzQjD6N;CnV ;G`#rvw LFHŔāղTc6оqv'p{t{zC0dkfJ;~}`ZlS5E%_zˑYl)S{/#xjXD>J[Tj_/ ͻ_.t`'ǛkBVFYhrG[*ûeOS"EdQSY|IQ~u/"A'E7 h3*J$e=XEPcvg7(DxdM9~MHWJ'O7/T])-ԟY5i Mmʨ!_k)MQ@+uH PmiMUdN6ZNjk@5@HKט5VO';=L" *U_CJz?HcYw4<1nhO! 
y(Wcw φbǯ_(Fl8Ùn߳ Hw [Etg2z&Oh)ٔo&Tt[܄7ueD ?ns 랲2FU :΂ l ??&B]6*A(Fv3!+;]#iV Q=DPnQSx1@u8"l7 tSw6y}@&H2Cq&fHrOW-clx,Y_yČib#>ND9`kE#Z?E1 O ڌyњ+uY^F J@*bK)dH+r9kNݧyU׿÷p{JpA <9ut!: : ˪p7#R9NeJV 9sir$oFWd˜J\S<'hS:̣%8҃L|7?U~OGy; G>o< =yro(G|1SܖSVtFf*%LkBr {+2s%q]4jJ<8 =*w#OVk/V x: -cr[X F˝=]F.9XD|ꔝuDŽ2vym̘Px>a|k({zRe͕܎>?4q׬vԟ|㓫Qv2$h[Q)w'S&Ϩ O>SyB״Rj0Z {Ir2 v:SD魰TsД"̡-lzg4BYJPi!{ Zh 78=:N)=,2Rδw>dT8F EU *<.9-3+O@ D1"t"~ 9 78]p+ E(ne!'gāGr(sC$Zz'4sQWIhHQL l2@rsaPx!ޙe;˩ر+f0jFh]54՚_%-w!y9[h,u݃~fyۇ@hG&# ҇}S{}*լѠ'jt°rZ~U?se'ӟ΁zs7ц\\Qf8,?=>8TQ8@~ -"qKֵT!Jdo-!D{6$ /Y~pYp`觤_5I)Cf$5w6)MRZob€LҦwj*e@lJTb!֨t)pܝ `t9eEվJ@kݟpj;p*4b|'Jb{W WZA$䴻ҠJV\Z1.ce#",Bngh Y!>jx`she3B2S̅$$Q;ß|WBKԏ |w86 goʹ[QIu RkC Z` p)*5 6HI5mz*jA2,*j4LW[x7f',R{* ?8+]h#8?AtqpqnTidy&ө|ڗ\CE.%Do.wy6U(U],HXgXESBС[u]P&5$Tѓ33 0: qTC;XіtFU GȸGsp$biSՋ!UjAYi Juwj: z qꢊQJ/+V\P!i$$P#0 &ri)zEc  ru"|+B@O '|S׃?;@`Ό!Eh~e]",}̋XthzuR Ζ8cX B`U]Ooxy{3 gׁZoaͧ3>Azx>>No`4$XSqdogߧZp.t?893LAEx_nF"@L~(#EgG- FkwBF YuPa΂ ))BJ}ƤxCQ.4^M<V(Ş#c@|5sMrOC t&.E/DX A;x%q!zVԖm_8] EW.LU-j9zWV(J_ѵC#  Ԍo@z@FdpHdcy[)>2v3f+[CrCE%=BU)1=T̀GB]5ٙokL+R ($R-/_Ӏ@+-e&>Hi4LVI Θ!wKSָxfxR 6Ί- -y fMoxbi<{㳙MVPzb.Xp25K(zV t J@蚉=%JqFPcM8Q.JE1V8{+dW\;^>&cRWoDeˆ%[~Xj)?fV\ n(XnOGB wˍ慎7o ]4[79Osm ؆VlۧVQr[ɢ=E4kB{uTD9LzTӾ"a묦IMybRS&5~mNrǻF5|HQ/0vHИ2߼}ӽf0:׊B)^]$tC>S\,-{[24D"dmTBϩFԊ(PRj}EK -Rgbi9KkzPTL>-LCˍXkӨZm䌕U b޻&U$摢t6b|_Y<睄͛&-ty[A8=<<(RƧ+]}1k~\H!k!ON/a2 td)]K\B#47fW|̒֍.{oovi+ sY%܄iHYGhO |wKVd|DV)]t;ޕR4M'gJ.8;g"t Gn8a5H<&ۗdJ.8;gѽyJRƱxAq9o8@]l7B$eH֌TTaUC:CO8Lft|2qbje19n']׃R6<=9%pQ 8l4Jjf^Ixߙ`O\?WF,?&Iz}_ߖ Y4<Qij$b0H Y!.DfW#*fȗܮnSҬӂg҂h!jUf6H&H_[יGυ%@?՝lFN1n{%T7s}<"fj]?qΣ[7HBSNP\)4yd9\K<l%zĈbi/]lS l&vh&4WΔL7x4DƊbX:-(# <]`o73rWa')~xu)J%Ý$<%O~sX[(LڃkCYܕ=T~`޼pA 1CϤ^P^Z GcA@c%G!Mfbd-M-Y.6Uy\JPb< .} wM9.=o.kj-0SL !@#EԄhN %FsjqJd(A -hzn;gh<ަ04pЌc 0\^UCT0fM48΃$&'!^]jPE"DaȻt hRƪ] jyC75fJPgrw]cZs+Tw)q_6`Q4\7kԘqڟ"ZGr§3v?t`&O#m%I1cٖ$<I6D9%"Ԛt AW_딡m̊0m9}1VpljE13ԢB21Nǧԧw^xspcY~ރMU^\؍rGdEW g쳝6hlDIdw3Q`ɚoࣥ'3?*?&RԦR L;YYeHa`/~ns/9lx @eRٚvYܲƷOv{yp'Gƻq1I#=~!SwdϤL31x*1yuL60/+rªԝK8scxHg dn ո[Õx)âɀ\Ȅ`0rF#bQbKl )bKZk:e!tWZ1QXe۰c#9"D{x&dV [H"{D,VCh(NoO-v9J:(k{b~PN`(۾8Er!݃Dl@ԢpF?k˿}x݇R+v{V?;֗l';T~+!ɸ.ܷh*4!^-G,o%}PLhQ*O%%2X&2ؐТٷ=mXI[ Z5 f4Y& SJ6k0'1BbP3z: Dot ;~ׄ.\z\JDT#z\Ps/hͥ,3u>&Ԝ2H/\z\y3>Ę'RLS)#A*^' O-Vhinr)όr2U$kPkB-5u~g g|!MXܥϦ4*yMԾra4³&f\R~^lͱmo`3D(;RÃ1AzI HMrcI5slv ^k-:(n5Pm:J(kدj{nEGߎ&0ؖ )L7SS)5~{rT Wit|@2HJ5}s;G7[xF=Lb|;кIЈGO L'h. 
]<)*F-ȧ88j*ThK(m@ `7` +fL@g28C>֐ëw'XS+?K)&{;NdRV4NRdieQHŏpF/) ZXdᆬcJ˺O[D'b>JCǩfD8qX5D@|`[C vkhwT!!$7`*z1SQy[%oGmvFٻ$c'M2xI\ERpǃIbB8&-#ʖ櫗H6(x06\jD(e 5: kw!DLHYVf_~BSY cj2ms.WKYVثgN9*|If$ÂMI6XWK-֨Su)Z]΂iw(jE[+M@=N'YG뿽&:ze1%0;DAq$hQQտ7{C3ME MR{:t2+d%5܂b1I1c!Tً "QIG((3M &=cэ: 7>tjgM#q- ˼uau:A9pt77vJ-'+['i:>~ pz T߿o0:Rx828WEQ>Ya0-(_{Dт8|AcV[CXE=%b) 8DH]mo#7+6=k 0_.;{Xp &ɧ"olّLW4v˶$"ؑmTXU,VT"uoؚc/~w Xlzqk(i˪v#'ǜALJsEUSj5J>ӘܿISJQ6zYtm롄qg6BÐA6NԐc4$ab TQzÜHI4e:)g=o 290杏4.m(՘aLs>pF5P+2 ^ tNծe/tJ1V[gm2|ڽ7ţI12_(|`%-T3[[SLY+ɠ1) q#Jz$> ]؟'Qd*3, V|)p#^pʄV($T&оkJ ׈DukZW.CjF`xqH GM>׫՗+LÀ3ؖ"TcZ%L |2=O8W4% E52BHV$qIFPq N,ո%˖R2;p+æ:ȰǨ[TޡEY g&_u\j1)N9T"Z#)DCխW$]$&bEюEvRsOZI'Вܹ%y-(Nz_(5Dj:}sUP͕IFRdI`+km[A90* qr -QTWS&y@GfYj%ZɶCwƽZy(?,1tY֙fG[B.sNK,lWcARקx-ӹYEnt .ӹF/}:SPQxE"-J8x5ftֵ_^'`BaլSsy,-fkMl ]Uō qz_׷[ui qg+.úyżՁ#* ?Q{~~Q懇\ ݫxTa[ju=ƚ,qm=F.=n.޶X\_?2eѣ]wG:%l£$_Uo%وώ_٠_WWvyE|2+ ٠xXpK\k׫V1p}wl ?(]}v~\ͻEA,3ǟ[P u di~aIYA[v6f]71̏άm['繢ku lgm^ 5a4]hJUHmӒ͎w| *<T,4 rE(?P)x8`UDl?`+ҭ˰ۮ+Kv<֝0`+@N] @Ψ`-}%|ۻRsFS QǤ3D(8S2`53&$N(mvP5V^HjK#JoN<tARp0uJG.׋0vح͔ ҵPւ()rQ $_A'H0PUL[OZg*KdvrVb)f g^ Yk=NvQAəv;]R vQm .\D) 9u!r|Jn wDre5S2"Je2pj/+[  j2}{ BX(z#Az|~}ysP ^}|pA7Hc~208 f%KAKB Nr% ^x%69Fy`8$%bX%xw=_|nkz!.օ8ɦrUof$Ǯ/g"ę]$/\u|zw?|R?:%f,Qݚ|WViow"y4ӚW% TC$@#24Nqō迡̬ ;3VVH)~QXiW u!^k2C|}ėfب ~0pLQl-Hmv] zuoToe }"¯QdY^f'.^[ѣ3ioK%Gm]S%t׶[흏'Ahz8u>{`Ex4q\J7;ɐ)yFː)^c J ~wP"?p(Cx) ^rr ^ek6}>.FSV$y;+Β(R=ʲK'h=Jd䃳r'Vܰi*Uc==1o/;ѣ3i+ \Xѣkq*)X] .-j*铅֭TS( s(H{nT~PpP\8%]n -D% K   b0+IExǤ!70bφ'P)VEshĂ#S6G+=:1>L֕㱮wWrˬQ}$ZKbtx0YJ^;EpSVZ|'J"{su+.]5I=;"(k$yd4ZQ''yRp|bѫN׭Z(T frE͕EXy> ]d19IyI&#7B,PQs3ņ]5[ڡ5[ 5qYGr)W! IdI mRͽN(`aդ$Tl6NFi )F/kTà40ƭw@ hF]s$oV ^8N̴;PԈ/.Z43P `V{OÿFpD‰"FQ29ݲ](BQ + X6R` @瀧Fy'!V\P'%]HiH%F=x NpĠ(H8A`S?"n:8{SJ&ܢ-+$Qq&,i=)G T̚ `yޭ@V Ȱ"P=G }vZ@sfZdo6Ԡd<+j&; J&?/\Dkds9vӭN'nN;hWZVڭRև|"Z%S}@y÷C=|4_0,[I{ۢ!ǚKΔP 850&!AWm`9蔖E(}'Ǻs<癿 G~knՕAvE|n*:H@7~׊|%׫p}wl ?b76YllHʢiHjb=~#%31rF}D?2L:2n8wwxsYo Yql.㢊R ŠIBq/+DU,|ai>{mG,EWQ˷>_<pNF!Uj\"Rsе[4<<)<6O]aLa20L$Iӈ#F"=̒'0; {'w^ Ⱦ;KI4|pj6NPw R>G/L|Qė<晴-%Z䖇(AjIIJ\ԒO|v/ RfL z]Ep'f`1~ 0)׭B8S][s7+,dq9ڪ݊8N^ba0 ))$e'~$%AbnRlI u7"/ 3J5\^L%ł֝=v.}2U'՝] l  rlU Z1oxXѶ9D3Xexl2͔eYJDY8xCs}rH|}H\=!u"|-$#~"3\ ݾvֵwwfqFzx2(~MWiIg.޼{ .r5%\tyD`JvDD)JQ2MF ssX2f?_X?(  mMCXa(ڧb0w5: `Q wM|0w_%A%W>е<j-E;cN*E]EJqHuDDť7KP7H:HPfZ,Y2it[JuX SI" 4b\Pr5E9ہ=\K LZj iB<1Ȟy}&Ѡ$> {KC+jOpД#8YD2!q0u.l=92n_`G*]gϟ#Q7ߦhV>ݘ+8nvb~-eTĪUMJ mJ0j7Cw`?ٸ5gWoԲ Vol|8x*nj_p L˴qfVym,lZ`1#,KZ43lg3IO}4Ҟ18R_&a5ɨ-`ϐ c3)sU!MiB"ƈ@f1 ivcN[:` sJTCVOU&ƘYX4*#F=)%Ҽ&*ʹMfB4h|JfE>#8Dwu*KGqE&J}TilEʱ`F0=cl`!9rC֤C x2I>S3D>;:S , p]l[1CJ3ndU1qPQZ3#OlW6S:7)$G0EG ZUIĠ!(EMF+p3rR 8ER4eHĠ>ŌmFkN#!dwV_fP{xhICk:㬬31{XHD[ =EQ54g`v]Y%"49oo<{rZY:\Ank:,O;'4iտS010y@tPu'/B1γOk~7PO@?TӾnSaNajh.Y9makҢ:S<=ƱP3 T 2A(ͨ@4Ia%K¯@8 Xta.~0%UMeACN ໻{_{=l}r?s4X.;< gغo[$oGG~tGG~t֏;ɲ4\)g"Jk)"TN g5ei1Ƞ"(wB`~]Bp:.}c|:`}U3Cl"_r_]|tЎf8;'oִG}E-oF~6 hxK_8F 2h-1ќ[QjXXhǭLrJ";u$NxԳz -}[G|wGx(ޅ!30UVkg bIpN?'t3h!UqsQ-ީ%wG:R襤lzܯxUTj'U 4|iM#hP+\<7HK䌢4ȩ"7@ x(FAԌ]}L9hj}Fɂ!)N.6\.%J) SFꘁ*p/^]%Jab2aQ!+^90B)!rQi7DӒ* V @A̐zt2k׉$<9miaE3Vđ*8oi(55#]F)a]R I,ֳbƤmh!HXRTA -fZ\r+3[ojaTRDRt0PyH-Y"S ;ByJ ݎ%ex*1MYs{uVq{ywUAÉ |ra%kYXf%/ 0&')K\!Vf9ed?כ>_~srFd‡oG_˻c%(w.pjGAX ː?CYl̮rh>ejz;ꥌ$c?q6-+a$"c~+,(õ:|ߋGf\1I)AgD~GS$?PK1+&@w'C$.{zs ìYYZ˜WV|=љݙVCvvk>GoV\{//7cg7{0fz'3'S d\!qӭ{!zǕz7RwQP,/般VDT+7r!M"Tu=~OQ%:3/9@FXCuR]­?ȽUTàغ7eȋ| I3#w[-}pf<ܺ\ Biu%=M\mV kHZtp'eGAQ"[<ťSý6Oښt^&QL2I,W5wk}\[ŵo%ѪEX}4G$Rd{Ƹ[#.Q'bCNGMoM; Gsc!ƒ<hQg{Gنʞ|\Iړzߞ8\K!K4q~Q̾(k%SFs&FYBtWsQ#$jtP{.."_|W;*<7o)ͻX.V0/뀿W 7{R@Ո?ZJ)-_{Tb)CYEH.8QOkΪQ쿟fTQO.84}n-lnR8H7!-%-@vE`w yv5 M.8B1y~mn\7*/%ĚެDF΄zp^\TgV`,n܁]ʲ '<6כ(;Qp 讣ۅ<侫W>pr85 [>p?Mt | ҫ7:|r sR!A7 xJ3A); C5o"Eݐ 
A=fҵsJ`T퀋j*^=8\f3~got1(gWUSh$ _ !p*aRv7 B 5OBɕl!$>SBZ1I|U`;[f򋏬ή&+.VhLӛ/A}3k*i%AyfT3$AK mAK-:4R*Mڃ*:zG=62]1OoI ;sz(HP㔓QL3.' X>T9`$cTP `g|zw̥8yKEbY͸Z5h$\ Qİ{*NeOG9 (B2@m/fa?|ihԈ~PO|2,w+O2VG^G^G^GU&51Ֆ䅣Y+PFSA!1JDƔ\ 3v 9|R_N|df>`Q;a41.tyH|C>Xl3BFG#%c[uI].52(Jǹ7#d!y,1(M^(ZP[ZmE1Y3O2R~\nG>L}i3ͫ#pC>wO~Y?=(nVCL7,y??ܾ%SBH&/n?̗xϊSADdg/~|532/輹L§8[?@|;3_|:w ߛKxOp<:BIJ1忰>Lp[>z)X-jHRgD(iQ09:ۻ $ON=̔G^CM$.jp~RSf8nT}Sh9{;^\6[xЪv ftCvq ]ñ VӨ]Di!DЂn4fn~+BkEHc-׹#ܽ(ϴT8xhItfS"+Uà"3Ɍag5 \7(3FD b_1C-.\d][o#7+_vDu"؃`,n^fKl8[Ժlݲfl,~U,֍$&5A;Ji+؃DD(Tό%gԔ- "dGbSX]M6t;$yL+9/\Ç`աpE)CpLskJXiV0%΂RS() (еyQ[Sr$TB( Y0v9KŨ"-q6Ed:hg7 ^:KFQj+VnmeIɭ2H7Qm%qR͙$\mJQb.KEcUԢu8q׫ZVAJ-VB:R PH*-B(8@W̚[5&ՂnH/e3ߎfL?.bEXxQ rhD]u`e""7mu]1_pBUƒ?oCk2CZe^|uw~3_̗4O=\g|ATc"s4Z7s~A_=f[Nto;ԈL%e}N@0ĊVbHZ?Nlpd.c B9JfpI%ЇQADunm$渂܎7Z>( q%ph3 3薜wGbYiEEBc'h~x2|?]KiB$#CQ)N;澌~@fDix1IQblmYEmSY"`2/HFuWoRC1=|vF 7-7%L^j @ )QI>iU2gg#F6!hms1zqK)(lSDKFUV2EoI=sg^ Iv~=u@ =xQ2O[ޠۻOCJŨUéP#̪dEnFdkvEXY%3iB{OCZ(ET?ݱ'zJpy.*Σ\?\0W'r0 ru<1 ߞ MkP^/vQ-v/{-is9f}d\~fT.-W|2( |s= mCCP>.՟S;#2yxME.ۛ q˸8G}5˞}f1aO^48 GCAm%ޔtq=ޔ}VtڨS>5jjn(ы:ݵ^g O\>|TEl4j6`2&'ӉҗETi"=Bub)%IHQoŧ9kw.?tUBwWk>"o"P^;R4M?\Z1s{{w ޭXW0 99p.{"7n*6Rh2AÐrϔq3jhi(Ǧj,5+.+n݂vLA$V#kMӞ1 PU3{R-Ӌ*6.r_ Ǿ쬞`Nۂךz Q~L5 |`DRG >~Xo_}]v3~w$mغOJDwu-c,0]]ۈRq#W'r)H˟IJn$ŀ\WRqR'Ϗwvppaٍ_4DS_3JgޗF},%Y&]rrU2q(;hTetǺ2p }Q]NV=BfR|3pL }[1hlhzma7hoS;Jkq'}UBPcKY h}2}V3?Uھf!nNj[D{2ɣZj QRIjl O쑒HIe4zj6ن)yⶲ2۾ x4xm_<8U:5`i&XME-†`MoCVͱ67 F`sbg!o%u(&E{tKLF HUEO9Lm8JS5@Ӝ<~i]Zb[ݶÉEp--;Պ!IzU:&ݽcvFؽKdw\Hk4Bc! E-s[RRmH cn:in<1Bۀ^LS!!/\DcdJ;uuUlZ~K62FyG0 Ɋmmtz?gbBb63>FʴYGmBhהe66P]]2p\ݰjĹ,D_Je4UIO F6SI5ޕ1y 0$ƣ `9oǶ7ǖE:Z8b@vAv/BDx}Q,yǻ܃BctA?b!O B94T9qamrd$e"X9,^ct}RBHa2Ksmk|?$bbϪ,*a9ʤ6vr3~º|pb>̼MNӓd9'˱wmIԴ|l>oha sePJsd~-B6av0⛗IN9of6`P&[rg#Q!UJ0/&#ʚ^ Vd<%uYs.aIeX_e))*t28 XEŴδ@H<әUfHIJX'F 6jOcl\*"A J%KMP)ѕc2Wb T N5Nu>#gzLjYQ(_R*f 'JMDhn8%2 )N$aiJE$s B>al?Al6}U $ ڛK|f3ogi^:y}IqЄsxI.ӷ9̪Wُ~=SF];Wl{H,8T߂jztvax>ң91^l-K%B󫭪S|@$t+{i`,Ke&K2,G] -T57f4G:!+uα2/ (C(<)"2Z⑦A,Zvk}g4L̯מlǿv";egn2v904QgD?asvg ϒ0xY';.^34B^ \VD=U>!5hi#>@]]F`]0.z^| s'sرP3>gR.AD*Ͱ66#) eYRP U ٶ<1#D@,U1zskDJ"vK*r{G2O7Ԙ fVV?EH6LȂcwѥG69BMK'--o g(|`}]\MШQ[W+Eؐ`V&LyuۆJӤaV'3wfk+"(| -Ҵ* lWFs-!u~e\cOG^Pc%#^g7?'Q=g3 M+(6 #^)DLpLu2qFl^ Z`#S)N(*;v[ڋI~.oFj-Tܴ!4'f\AQC,S LIM8I%/%38˘,ijpn'1P}g`Janȩ,mHi$¬1qy@B#qBE!J]XreVa bŚwkJ-xOzPEug%9_`E_)Snw2sت]緯NN0:&X+7Ks>)g&o.>[ /\b^VLgEA[ɽ`zPAw Fɴ >AavC N/cw46($gVkAh 웸R"T(〮ErK2#_HYHbէT3,H&R-3̖U?'ϗd 1_Gף/i~X}#i_x3Mg':ޕe8/h6,[qu+Ѯ$ae9O6ZjogI2A\VdHݯ1'w53&SJQǭ1?BCP9LQJ{ Pӗ6n6Q黏.Ms3a. ~\zPͫ΃n+<]h-[dmpSkYQuKiSPBF^ZQnٖ(bό뜆ZG;m& V5ou\mVg&(*F+V[Qg%| _PpQ@@+ ]VɴFs':gL=DpD}P/>gPM'񭌊0ފ2ϲ\#z1aQ\XɣIv-).~^r UJui ~AeZMi ը#tJBLw[ WyK1FrB8i=JX[ sfN.N qA\RGoux<<+[+ӽXOF#$ŴQ? 
GJ Fb3#a pGN;e}P!Φ1åg6chO͢9W (zho9~sd$e"%+D%7Fgft6J\<ugUpvA{NO.nE)-^lj)}쌖̼dȹEIԠ(DVpJs jc(bX,,' 6:yMENJnrRjTN7-mk݂涗.{3vʒ*Qm5!j Vc}}yqcfկq4Jc^ FsrRAUz1},-yne-I (tx<w?h<԰Np,X>2;FRlp=O5܆p)nj8z߶Ag/!zH۲Og/z>[PpZj9p\j gQst$XE #&l`.0,;*;B4|jr X_;807%Y JkɏqVNW̪nqj#KQ، e$`7@#pԔW&_PMؤs{g;.+>Xe.7MKj=:]n NY蝮P$6(n coBamGBI+޿B@NǾD픤>Qx%f4|?_$}..rmT EVbd.8"˒IjwyQ]MZg?v >5\m  G$Z`'tFۀG-vAMì Lϣ6$䅋hL)9-@̀->;FvYiM'ڐ.;VKhHGOpfsƷry~WNjcƈ9SdA;RKU OcUf鶾ˁ5l֌;Aa>qm9~3w¶gУPwwL: ͞;ٻFry]9J藛뽗fyIo?&Ia'Neʉ:%G"E$:u ,^h̎-^ ?aeUjB`j-̗߈"hkP绋k O?׉,-%ǺhHn>z5oc3%DerTl$ r):ЧB A}'FR!E)XYƽsF8~|R r"fŎKMPI2Y4!di~_Xe0%[@ސ6ꗐ@RUeD5Ƌa9-s >R؜jwٖ%ӲN>+Q N A"'e-Y 3"dv^EQ$Q&ALAEYv8!+Ŷ Kd^F,:\LɸM e:ҋ,pl$I)Q,w,4t:}&$WB}<.>ί/}|;z}W֖'ˍ{߾͂.^Qw}VO;\-^+}L҇oNPv&xoߝwWKVŻ?9OlD׋\?/[i_|WWVg@dAÛ?Ię`%Fv1_z㊗J @<C|BtZ\#9LZp>|$6ːhT;W hH[hZ0rJ&BۡT* I;!oi@mV|T~^w?nɣANmFmz&ga'Գe$?QŬҘ׬َfU停}AS hf };IV'"\ɭHR3K]7f%^*d"RK]trw{XBBo3e`u Goa:T;r̘oʘoyúɓΉI?>R:u8p^Ἳ)' _W_EG~A\B+a7+_ȕ9r={ԍ-Phq&3dIFM`?too}->iRs&Z2%)mN5^EBJG &׍$hS$G˼z_A*a:yGtOO8Wy,~ɲX)b/g:0x)hS4{Q|v=vX<=Yfbb+}s:*s?;EG5Rدj;dބj'/9 ^K P'h WG7Y=Ӕ:3x_K+M/> R+#fa]5%5p!1\f޺\|ׯ^IQ+3iCW5'u o$~984`AE75z*ήaS{ˆA=!uC_xL":-kqd(2< 7S~jON7zI$6Li>RAhk@IɽNB۽ʿG<6RBN4 uOh&mhϬk`ǭÜ^2r jh]m+  X9X &{D]DPP"Ͳ%|̞ͪR^% kɊƝ=gjĈ5cgjrZCvRZ{lJt={M`%I4yjA՜樣׳Zo9vdg2zt3dR>֌3Ùrjd#TLŝm5eW x,vpzv;ߊdĮ;8 5;pZh<KGd5>q@`ݨ3Z[n<!F՚'Ǎq$QٴPhJhR :%'IB¢'pD{|ȑ?vpp v\JnB^axG?_~GPG3fCxܬd9 ]ɎlYonsI3ǬQ]eE(kP)^# S`c0sJ`&MYӵ6Zؘ5rk̈!r]5iυcְ~x~vڳjk)"g:M+FA(iOgG3e83IztIҏ{RWGk?{t c`5? 8;>?l1fy+GvRViCS O ^/}4 K4 KnRE:abv9IIU-Q($CҘ//ɬKAhP)衃o.81.a:td6=Ay jkP{ jweXXV")v*c-\J 9Vg f: :2lծ oA5Im_:_ټfjͫa6f'u+!ecb#GYxc /\/RژvC@>pi(7*T_)Y{$}qQ9fhM)*evA!BxXatN:}QQM P`VIGWE:h,"S 6N y .@B}5o@_\(P k ,utT686Q$B T,( >n`2,is>ăCܽz{?-|Fer!0d*fTc-QT#d )U0@-]F[V &!(>m ,8g"h tx =c-M0'DP2|$aP*&H];W8#5>l'l?oRmx~xUH #3&><`i jkf+Drx]&-cѰUN~qndQ8rc [0̞ndf{.} ;ElTͤ[4a x6_T^iط׌q;8 pF/A|oNO/ӋHv)tPXtVnXqtXAl!=o&r*\b2*M}KpndyeK<`m hn 7q[ Ge#tj%P8(Cp{QOzz .Uyуzet>yVbFhẅ{D)I8u^INN:i)'9 _K$fݛL9 MU-ꑴ%e¬ ̚EY% LN3̒K )ߛ5]ϹeYmHuZ3YL;/cM+kdDt8݆d9$?x`6aSwQZ#]Di7/wКv;Z]ďEoX@}O5)9,fevۢs7˷6ZzCl Tʓ7idFQRIFkP9"y؛~Fqy|!ʔke)1 YCPւqH,xXeB!h靰 %2рJL1u.&[U"uCc-v,܅+BvM" p`w7Afw2Ʊ|3Wd-Kv(uKI2ݬbU;r::%LXg#z`Sp)pAH9w^,JQ:5j{oNoiK7ȐxddAfυ C6qPRqaeIU_}V\ )PU%~Fq זtT;Lh1k@'Ԋ'@ʞAYTfWWBe={r@t[<H-|9"pCs7rt솊w@8nx(\\]XZR`8b* n_Xrեut%:0ɻjZiwVq/t .Vq_b]$#7k$UHXWi_ᡤ>hv$)Nh™dL"C|d9iOKa$ŒHVrZB)Pn(S+$TrD =>m_0uѳ̍rSZZfV(M61f,*Z컕]8_XW3dǢɒ&1JHg!49I-9hΜ=E.Y.jdۃxcN -s"ČL!HdyA<2hmF~=P%іiK^VW~@W+Vĉ1GA&gEcZ.u7_ 8nx(rҨ )Bۼ5mw+3zw)?:Hst[!D7RBjI:qyF Ey5X_nM[;E4g1*NU06#OsJ.?.yni:z*x7Id7wŏi:ztzq/9Q$L] <³bw聫_ŴT N43@h 0Ibbsۻ x| P4`ah@cT4G`PmI9weۀ ѩhY E筰)<vRpT2C,GpS-5Fri'"Pͼ2V4hC*.WA%]"B] U"ݩV7NT3VsJeIJ;$7B(]m9ˠ$3!˘*L1G- > L@bܰZ9n#.āy}d1C$;H ݨYE$"VcG5D%9!KU`2xRzQY!!p(uH9F2>}J` @JoC#5q1! ImwÂd+|v&"$ ,('˒2d J([RgQt&{i'Ɇ![N6Oldo3؉j`yN~{+֪U'U +1vc҅zNzQAbFWdW^4q?هrrՄ8a- I"ON}NR(9 H{kHAh1 9ܼ(LVYp əEN2]N52ɯir;X:{Kؗ׋E);T_~?Z/>X D跳?Lwןr.=}_9JKu^V{Ze xc6hG%TIB G:!]5NtKw #FK$H+8"-IgidnVFwI7f>W]M) ņ+]ZF'.MsqMC7_GǓt}J]ȧv#CfA9a'E.\!Q4*g2B?6DGW z?]5rVBdE0&)e;vN`c$eO[d1e}$21Hs%9-Rُ(!<jsS6y~eܙ("gS)J(|<\O4z5կW׿]b cir&프^ ьJF/nGYN)^M_f9SEfo@N_\|xR3Y4TBF7V RX1(+/4wd7ޭu5ڝ?7ž/KU^UNξ(XaXqyee?e4Mp: v?GuRk 46Z'wܣUFp3Zd)0{3wBYnRT12LGZ/]iϫ.EQim9÷yFK|ߍ G~8ȗN=m 4=oK6~{lev+mm+ ވ1mۗ8ox7b%f%֛g/=s{i`DB姻i&96ux9w6+-ktl#ljT^|??NjOwyG#k8qt5Gmahm9:'c*diKV&6ɟb-ĭnhCrKKkUD,giIؒlbRBb( @N~޿G5Rwh(4Ca(hwE588V/)aemExy97?brhb<}HO!a5!rv8Zq({&[8bh~h|rᄷ;\8osg>}/_q%DPx+bJkC? 
#Dۊ~\NگHm[_<"^"!G`Byub-hhBo10:X\Le.dd(H=^CCUp.}l|dRpbwwup1;7p1+{ .E]X7dpaHPU4- !ШF4߱l5T#D-bxg^0c6%0Q\Q@#'MWĎ@2b/f}T؋3˛`o((@[Is\-5LvPrC}H!^ 2$)Sw$YMB'eb)DOj(s0*e'ZB*ѱN7Et0[OxN,:FEz2Q։F!K\ڱiuݩ̪S8cfN w\ Za֢g H4y d]p)$dfȀ92eyU^[C`_Y J jtz}u:< Jyu^  Ng'd!Uwo6o]YZI+6H[N5zeX캻RF r>}=Z]}Q;ǜB/u}EhY:*Ee3uy%dmwd5{1̷\ kp}L+-$!_Vɔ>0u͞ZZ ڈN]ۀ[Fjڡ[}CmhL5nun.St)@#aj 獍Syc/ѨZ>1nstY;H9[r݇|"Z%S&,pT[ɷsmٕ2[kׁ^BtOs]:zQvSS~fj-U{&j1hckM gC8}<7Qқ؇|"Z%SM~lݔ>Z ڈN]ېlқu_ҺCBr);ZTݩ9z<(j1h#stɃJ"4"CȖ6y2eeJvdx],%TQ0Z+-ߥ.,دō>N7eme#Fk8i%kPcCwC5=zCVW2Qp 2MJLn 2͊đT*qΧI|G3qQli0#:O0XM q9NӰ"7#Ġȱ<0jiЕl_I?\e?FWVyYfĻP޼՛?՛ܛOXtśojP [O)_c"3/cxJ7P^h4e6%i~Y pewOy@x3s+'|^<ÉDQ5Np& 8p(9jBkʈr5dm=hPa6_r7GeRWa(Yʊ?SI}ۅ!刴XLl񖍹R"+,H6i/gD²XȻ=\~,y%{Éo| y%[BBAl(|4o[3 є*ר ;բʌprV\q쟾M[e},iҊc/3QPQ,k*LbڋvNV'q`6-p67r@/ x0`+q܄/=i9fd {HyWw.&(CIJH/izOl6zԊd{ *>T3_iKr {kYK{E->澜cfϭE0́6A 0CJ)>JM61=sՂ8]BHe޾ŮGQ >qIPvn9ݍk5R3&X1(blFju]6ˑ6ZJX;xW->EDh&/UQO]XFz+$ݠ2/Y\'I3%|q>m6b嵅1 ڢZ%5ŚާY}R]X[E))oϜ1#Z{[s~|d=ƗwS?O(ޟ"l[I,J:A;Ov-v=\رWcGE)el4Mv$4v,g 6hsqE`esF8ZTP`Nr)9?Ɨ^Q~/{%#oSGk b˹Nv?}"npå^k'-[;jjLb *EljcJ ŤR3CY{S%o8EP27EYSʗ]yM@Jڤ2l驃`n#lSZz{-2W0%%2jDb% ,$)uA'0_!+0VF {]joC$'WQM@^Jd8_vؠu t r^O }񽞒=~SvvA}:X_?p<>\!U|#?WF=([jL,h" ss!n'eh` !/oo>S{w*>]g t9"1ߚv˒]J [XbF t~9Ňq1:UZT,7(q1V(R.u"vͣoػZz0k'L5Znh-M.|TŖڬ %߱ &<*8bQ&WШIIrQ( c0M77u-R2KinklGCz5 <_2x0 1js&HXU, QfNR0BՋl $gO,=ےJG 'WlWϊ"n, +Ȕ P|bglӥh3o/֨ƛԤrf. @o֣0&l#Eߨ]mj65ݐig-Jig;xWM>5s,wSf ѭiFMt[rA \tkV4E[|JѭyAjXIP $hvXEW!j1cHS)'G\he HpN 2_uxy}{ [1( |ߨ+F$̂{|.S|j1 q0ybP:ߨnkYGsݚ-n=<+wG>E }V!Q#;XYS`%o8V]OWūP#mNg8ρ>\^^=F)G;Myչ?""8~ =ISJg\cCۣn~+t3?궵yB{/#{/R௙K]<^QšB2綤cG={we|(}r8IH\0X1k˒v3{|!gOŌdd 3$I| $hUzA^q޷MKsѣh/I"z0 W CJ ym%pȈaDEn m\!gZƢdy8S8W̒0X|X8F<~:;3Tp4DpƲ Zfz"chqPYGs71yS֍3+&W߆D%=ی^e׊kѥQ;aҺ]awk"=d/*95:#*g?Y>vE%77ߜ~䟲쿘<\o:7\_q.\h*?~OAO7&\aM3SlbDwyK Jrr1y0I8˷_q}}TaUq)zH‹uA&TQٰp>8u ܄he& B;/bIu0R#N q !JI0&1ėhwJ nzL5Ϧa<9.{Rk&Z1(imFM3_iΖkhj=<+w&R|݌[nN7jۚyVjѭ}0E'eJ%Ӧ H+mK*^CR-$[t0Zr1+}%kÙ d{'x=뻧7q1:1XTrBRW&۪{3]o k/yHx*kPM!gEA૪Tg2sFj ]3'w\t0<9?y$ '+ׂњ ?埬F'k)kxR< [H /?V$O|cYYKF^BXr1/lQϗҹ^Ns2{Tb= /Pٯ֧;TOTy{{=)YV4?7axEa8A1=S) u"@Ak :rﵗщĜZ=V lQEޘB(ff'[jdK<+w>e@R:J:}L~6p(kEwյFռ^ggUR'߭1k )1LRxw?8͜S^X",ቃ8%0O,>01~ۤM aO_("\Qb$V  'D{&hlb^%aw&8tLjy,q̥` L@84q5k '!*gxD]zʺ$I0'š'!v_1̟'i5G " !1ЊqKAp?wVwDN +X\a*:ӈN>3kђk.5c8%7 9Dh&[wùajƂ/z iHE;EH-b3ι^*T9_]Svy͹47M@zxl^= Q'?Ty%Vy:7:Zn_o,?~=>"*wk?؏7񛳫_d=-OM'}qNp֐||"y(G%ܑӲF][ J wmmr6v侠oV$9[z'o6tlT4o=HCh,#heaUM XM3Im7-R^C7z˻\7|oe+{o?AmY+-Db \q)՛wڶ~/wP: -eH 2er9%L̼6Nb0ld+fj;¼ZR| "Hjh0̤ z7)wg(qH%R͚vڄq@d^:wtG `V%O?l? OKw7*=MՖg|FSъH.Lj4GU1w$G *ۦ(=:~K'Ky]oc:q6ۋ5ϓz~ Da K#zJ bk?|`K{EP Ort8ɵp a!KE'hS{<h)nn?e4kw{vsvvabD^S7zM#VL5^껭7mV֛tq״$cx Di J5 &Kݻ4Q_hv_L 㤜C?6|0ɫo?@CIR۷嘥@hOQzF/7E(n~`,&\n.]Q Cg\a} ǚĖM],6໺p}'_O~=I3^r&yc7[Eœ72xnr_Ƽ K^57ql>b4UǗovi9k\kMUY4WF˱0>qނrT~4 WBuޖZ0h+Տm 69W) .6IRi '>4 WuJ!I): ?l@]VpJaRtlv9^ J1Hϝg޺kޅ?i=$Ÿ?k슁kv%´r;.5B.t6IZ^=zL$%,; HFtys3#*W0d:mnt19"7 Lv/pnrùɯ F%]BL]T$ (EpMCzb3;J #{QŔH'; G$ G4-ϤQN]:QsRvCb1f'ggr|g:pÉ6}mIRj4VB鳹Q#"uᇩ͗%xRX͉^y %DL51)z,MgJ%tbAw;5ȮɠZ(ʟMPF$l}/ݞpOu9])wVo-gVJϪU>Њ< =jЦ\W2ȫ|y2t13A3eZfƄQ%DO4Vm~YxJu댶ڽ~<뒐$%ˋy<;_]~?'Db&و?}{y]\EQt^,Z_W/ I%!MqyҎva[ ʬZ+HHUIW8$(AuO_; j.(Qޣ)wyA6d2O+d|za^71&4WTZyQ꺎1p>5 jU5 IRV16; 9g4H0,xa<.GV1S`j9cx?:i X8-<\~jcls/dz6ScHcՓ٬mZ8/EH,V}dpH65|}'˪OmYw휢}Lֆ"ǵ9[osH5Cf28=+Fn^} b@lHs?5ח'̗cʴ(ZBPaP7ZBU+"hLNooQmYh) =Fb8W.΁ z#n6=6A@L vU8L"0&[Gl@$DЃ#ղ2*'P0Uk*V<Mhd#FRXZQdƉQXu`SVZ\ނ,loF/8m`&-N ~%-#fl) t<\b6@B TD Uhgc]SKZ޽m~ׁ` x7%UHR9qrB@М||yNv4\_n6"h/3" Nl L ! '߭ $Q:Uq[CRJN(w@1&FηCӕ x+>8H_^KPA2BJ2m]흵Ֆgl[ ,ɩrs<;IP&؟PPj [HNg$81F~/O,vWRG δ,[QR䚏.YpX;!%ʣSǓ8Pi4R^!TōjM:#jK%z-TF,rȭex#YB/r[XJ@6\^}HBRvA&)y(fg%A$q3UUץ/U+f6>-ysYw!.vx ycE+OE! 
绊ž` vxK )` TulMJ*U\Ԃ1Bb55 $g=rZݖPQ/347T} Ĥ:.% {d~N19NzgF>i0 %{M_t˥o=([8}X_MM.b 4~M)5-XU>kϋB,|X"DEqT&9e Xu@-7I uD THUNS΁2,n?*hE1%%h*egMD)+#J3s*D"T.B1}-QC<L^]`z*XԢ"-k頪B%HN~KDs9W!G(WcrQhl9MцZ!"63eJWZ4% & JHD`XRHŽDЏ%f(YQL>ryaQ$i6e#%2Tj`ֈH=Fpa2Y SsޒN1)Qr^C˷\2>>U >4.{cs\j&A"}f? A׈bk{7lo[{sXV/~yF޴-}/n05Z;3"{(y~-HhGw1(׷8M3\΅Y3)q?CVPp{sy~0]^#oz}7?[ P;2!,[ CXt!CHkۈd| {(#6#s*Ftv·j2B"J\~8;z^rw? ||c'ѻ"f":T$Hv*% g *Eeׄ'J-CR1 ư5`xZ/]KNׅz4 sLC= ǒ!5X"U5ifbus&[,g G<1on+=.8JCNeQd65+tѕ ,6Jq*^M@vqBz槷g<^8 ۄfƮᢕWVU)\5;1,rXgq^3*djORZjʊv퉕V}TN BT9D{beD QJJPP<r=TE πMEp^1Wcye1WEƼ]璴ՁeT\x;ZeoKt|IkU^#7n??E:U ^>^őOTMhPTOCv UԖЮDc}g PH9[l`^j2N0% {%%69JXxKJXt6ys֚A2i{_GˣtѴE=*>+ JdWI_":qRO\U{j22KGO-+}93_ Ҥ஻")ZPRAHl(xMЏbM"_;{j*ιۏ0ŸX`oF|j60,p=\P#vk+bנ. s[#4MXyR8Oق/+g*߯~/߭es^O̡ b0N/ 1-TrwGJ}\\k!ty6M >qȴ0R^!$J&[抄mX&BȪSjԎ;M'^ kH{6)C*GhySjw4ԞT1%1صEW|nϸ6ĈqyvM]e$mo#rC_ W>-\QvsLd*{`A mPaN!Q_L7Vtx?R8 WԟNo_MZSJ>y@0EʦBZ +G;+ E~ifx|9ϾILd>}gγ 8&< ?}d)#J8H*g)r !k*CI`ƅ) 'Io>ðKE'+v1jܢ y߆ NS˷Vh# L0Zh KK:|0c︤F% >?ulof˴Ֆ&j _h,IP(ᨘ@&TpO$/rO,Zal+kP30bǠsv)4jB1F)'T7:AoUs".*#hd, UwIYE5 yɰD؞jd;cLq$0D%YB#HІEGnq jMZS52[j"$ID1͉U Y0$%k f KshDJk@Z\j )h贒(XN#B+Hٜr7ngy֒ůMwzI3}Dsw]>|}<}#8n_[?~|uh<ޗGxNUsiǓo;sp;U! Uo^(Ŏ#h.~WKJ08͎J2 ;K)G=x<~̥-V̤ꡊ3k"TlOqPTj=ϖfkd6.ͳ8G7 jF& 2}LM?:bqc;xyWȮE^y]ȼΧ8AgKTeB=PC%|]F 2 O.u? G;wv~Dt/wXA.ЎHervEYR;.q0e} }郴@0K*yJTy\ Li>QLRt]! ͙ׄ-/jݽ%S|ՄJ]ZP~$zkԀ|U-scםd5hj}4`ҹq6Wy]t43=?nhucOپogk%zqͨQž*H >)'[[Y9*Uvі5BŁI"J=6o+dȜd}Ɓy`Ht?D(-A8U +yY-:pP b\ SJC^\I"ONJC2H5xc"I)<6E8cH>IXX® bz]{xZ4wݵ6ڨ&~*=,bqyǃ^x^[Q.Mg\(5\Av[^P#+GtR ~ 9wn#eВ;3sC*s7 7+--,/gwH@mŖdKrٿ3z43[=#OvwWox\(qW!w $,$'x-mPvKOB+WeFS4xgǟ/ jbv8240a3[kBELN mvmJ1!p3e"(S܃=!Ae$ љR%)6-d%/(t-s 9b :e3R%24Ү!j U^& S*H-2Y*,޶>)~yC S -lWr@mGbKA fJLuk-˗8y)XɲF0j6eDEyq[ k+-3U`P*yo{;dv+IJ=(Ag詯n%4AvZNeёћd]e艟Dt(ƭ=8ml!3<}#Hvg؎q8?L|+O߃x(=ӷ|/_r\gF2hZ]_^C:6q>=}|ۏ'7~rG"3`#o~?:n> .8#pXKgFT[Urp%v.)/Kg,:$\}$Gѯyip kvQm՛rW#lejЫ8z@{`pc0},|a[F[ggB׾UT9[<;Zn:Y B;ݻ|W7`q ޏUq %$Ү>;'sh ʰ,SjްiR❣[;$Vf coǝ-_+'j/s o+tfQ4.Oʞ w?*tyyMWE*z/^û;-pA3:p r K%rnՠ+5ZgoSc Qi$nhZ,Y]_5+3k>:J¦k'RŤMPH'5ΦUTݻ\y~'@fyPl`#x)?!8{~j䱦9HƑDluB.œtѭNeo!$5`H _s*k L-WcU$TM Zٿt@85SԀQԞl>?luG*#8ueURӃC=,W-V[=G[=jibǘes.˪bR6r܉o2fS  TwS.ZU]bulHw #kPH1z^235<.y\qW83-DRhlHRp+gʲEm( }NFLFC?4hЏ?[QQQ?<㩱W5g*{5T$N5ecihvhD0֛RԑLo?Ҷͤ:Ъ?~mK?}eN%ԅ/\k_`4svj< /S BjAl8zэ5&59 pɿz#iO#|!+ K89ޒ-Nr$gduj<ˬ[ w'rn'rh*R+g׉5( rDȲvl vtzID}hֈ4]Z,q闙#g wFI迩U~6NJi)8,WM:||D4 M7n1_(G韫ၮ[qNf1ʁsDDxlEX$x 7XmՋE fƢ8Jl4J4(8ְhv8T󍾍O~ G ~J6|媘Es @UɃkyȈk?N?`aZCu$AizC5tA~|l:l %U\Čxz,EStt OB!ibF}7b|\qw$D"6y@ Lθ lWֵ!E"}&I Р> p&eSd=C-Ho|q]u$Bi,(>`!+CbRY"H,@O!EVǡC?@>L\#ЭHnhuI2+Zهf##XV2`)ȍ;^m&z@×nYѰ ӕ\~ң\zBB)brfjK$Jloln8_w^ž}[Ty柶jspjXV9a]W1=g*En벊]5r5]ϜfsU&jmmФ`9uVf8~U MJqe943'^MYYfN L6lJDRjFoAdgwVD604QӋBs&4E^f 3Ʋ8U.f 3b.!Р[8.U ׯtp\RЖ|.$N4zBژMt&R UaqKG$ H+(y9 9Ebs~⿋,]?*t?GrSl<Ըni77O›O)7ųvw v[RxH䇥,)|k('fQ#}Hљ =Ƃh6tHP z7Q>I;=-8@ 3%&EbSBV ɝ  GVre9Ʊ@2-9@r?z+ߑ).5F)1AAٴQ=&D#mU)*5/-53 \LARJ)b&蔡>ɂvQjFl0;{\4Pbi<|>dO\٤:F,_):wQe6|c ԫsa}|qTB'WF Mx#WTzKVknz+ ޮw#\Q%KΆ:ԳJ ڎd~#l`a^[.O/Bf'{RZN./?tqiٱQ;Vq"#J!cn U 6^<27WscL/fHϗ=3pf7GJ\#X1H}KO\Ӄn AՂk`Dbwa17OO}l~OyU;$|zpGO3BiFPx0n.Fhg%́nzHY$ (jP,Ҧ !'ZݺFYRj:eڕ}T4DВrR!k[I())3&@dQڤ Er:Ym+iQ:sF ^ *Bkìs=X +#o<᥆ >t%T)StN8GXk͔t Ҕܷ5=5VDHq֛b+}*F6ZgHҳd 6I',Vp/4.mY|" JB%/ VײhBhF8%l 1t]`)W 4 {Uهz V"c6(b.۔caJ@Rd8rrGF: jUi9꘍JDLC$n+y0""CtRfɪ2Y)frc|G.=sIa]9~,;TXZ"R]$=KB3rRSjE(ɟt3<|JoT8 'e]?BB *[wl{ Su/`GCa[)?_=?D^!Ԏ9 ;/ā3+]+2R93^vp\R f>N%mtQ%恗#9`mmXuNL+98y)jQz(9Rr\w6;gCO˫܄K/ ~=zփV m^~ߟ1)[ [>8N˲_rQr.L!S~|l6 xs-^pF-'h JuDJa2`Y +J;p;1rAQw1A]l}P#nX?ԃlU\Iz~TÆ93 %-f [_NbI(UTj.8-XHLQqrW߯1EJ`ި(ɮ-q8h<CnS.urAKaU a9k厃cPGKV˪b:Di~:s3l7<5FoLe6-zS;t%~gGB>z6DD!XI |7wW_if xUs]nd!-{v˛'.n?LV_~n.n 
q5ZCm^~w)uޣҮFBi"SbUWo@]|:xU|2Dhorͬ`_2Qx6(tt{{?8ܽs߶+AIA;!"\z`'w]g?_-WKٜldqFXuq/\TVr٥(b`cUs3X?ffiT[(ܦ0n'zWoNNcBVPuve;f">6D#Jݎ'$Řb]YGT0ֱyXJvLkJHmY84ACz1-ae'Q''x D1pOϢR*zw]6"GBIJ45~JW.V3ۭ[-D{ڸ<tПZ$iH׎$Pʲ8vZCQNC=_|YG=+]`VTD"91אџZ'xhK0.Ǵ5Vkـ B49i(6 M:EAi9u &=3RϲDާ:u8uNSDEu͞4|=$>t[-KMŭIKV}2[Jl' D% H%bQ<&lcf ۸sq::FcNiFC@hّ. +a֣m8||xȣzO#rt3^jBᚓw=$D=Pᾋ-z4?DQ;J#<<>PCo{UI~Ew]WLRvNlDS*WO>oJ%hT+X$[FWO> + ZIbg^@@A Anr ) 4KpEg(&8 :蛫ЅZ֪}>>/aØnS1Ej]Z5V6H?VoADUQǪ5'Qz.Ț{p ؟ʒBRY fU6UXݩMӣUQO5O,x5՜<^`# ⏺n(g;tQMQa #ݓ18jh-hvO0'E#cr\'ϓi9B]׶ٽ'`HrvφZۻK+VVҌ- @~ہΟV5a mѿao/]L¿?VtI\~C֣Yi.6W'j$*|ARtx\ޒ *";M&6:\:f{ۙ&4e[Zk)(?FeoA))13~1+#wqu_vb]/t TTW'3:O1{\.qOu> WXZ2"ZNX# Qq#19V88`B0KF<̥tω 8Q zf<<;9RyUh=P^:X.%E 9CpTe`T@ޗJʰ\k)AJt}Q7 IDQ-I*Ub$l7kAbFG~(G !dFʍGɎYDClBZY*uD[D^l/2~OR%Jlp!ae^Iͬ}|ncafjG1Di>1$QyPA`'#iO3b4?p r55 Uu|:DBS)ȸx%fF'K; Ԕ+WJXA7¡SɵtYKMYLG^ 2\j1= I-Ħ%BsU2HaDL=_G8{&73fKO: , 0h]M}A Ĭ)F:T_PyKqAa-0)B{Xa qI\Diʃ#yйR؂z/KUuA&QkFe *fQq^YcIs0jՁ_"- B+e ))Qf>& H0ؠĎ990E ?? 9Xp_&q;95b0X-9`[l 8&Y)$p.~ɚߜsj#83OI!(, `PrT3f5 LX#K/DD!ّ0u-=̓YZ+ :6ķgdn8[.uqBR/~[{"'vu :"ǻJ0I8a'}~T{zۅ>y\^Dij|I7˵.>^\> Bo߹07bs-ۓӿ8 NjcKqR)rF:~1b>2^E|H06bU`NbX<Ҵ45wrxE=|OvXD{ g˱PS+n(N;wV7$>:(l(:H=g+p*7Gӣ@V5wSYѻlJ#ǰWHn 1.\&bзso.4qIQO'ur$vPGp B'i-͚AПiTeVr-6IY]3[ӣwAof>J ~zD]x!n׷rrym{3x'3ߗ>'~__tۋl/|f Q`z/nGj's-  بP+-q( XL`xF LFW|DUU"!P s.4IjEplb`a޻agJM1w:)J.~#R& 7&)kab2dJZpjF,Rh#CmBpb1-%j(aSEP _jŢLfHTl]DR+C+Su  -/ c"DeIK-" 7q vH9J+T2Sޚ2DBh;!u 4 ha,.Vs@R#^[ { T!!%JTA}ߠI;^Η#J?U'LT30OY$q6B da`DB+SZ(*\imo[Fbl/ /]I bcdȹqF{!F\au#9ˣU?T?.v0@r A?-ͯ~S1ζ cfGsoSNNK 0M9W9=e5I LBJ})-&01R a1c4āK֞udEpkRxi< PdNKeJ;XRpЉG/TQ?RI 5l+ "vQ:Y{7ǢU\V4_BHU 6W6lS@`k7J1s !k '"t/!> ܌hqmW1œNj]8cn&%43Ŋ!=0YɇMI,[-r}EX-_#?Ono'l~ryAE> TVYO4}ѧ@W}>Av8WP>!^?\ӿߺjlIz?+7UFW.T =vR4t':a"hdLVUBin>,)t2E.Q=nGt'%ӌ˹D0a(Yf g`9(qSFMZ9 R`;,hQ erkf퐬wZ2@qʘ$x飗?ry[~kRħ1^ýώؓVFP9[_n]] r70Ɲ'redI [R"*U -of}8ߑ7+\K71!n?U>Z僭U>ZoןtF&):V B$bi:$iq*qKaX2Ǽ֩"{>s5vjݛm Q^l;c!z1>?xk˺)JQ%g{?l™; 2 **-$ u)jtqI%l8R2:TFQApimtS/봬nR:iF>ki6/yXй5nnGEd0!qSBjƀ[=j)EnQ.7b%inUV귎1\5?j)% %JꢕvSl$PM!Wg$~DH9GD-<B5Tik! 
(AzO|ʩ՟k=eܶ7\ak^߳mMO?&j?>]n3)uvh"oу⿾{f~a !߹7·KeK^7 Q%oOHS@qK2F4LRFzԪps#g-詍.%Dih樁igsT_{`}d0Q WM'hGBea*RႡua^<̫,,_> Ub1YcVia =cZW{qaXqAx֨〱cti^X(u'lʖ;ΘsE ֈ 1 !@!#mM55|>%bQFUt0A0SqM)J=}{<(y[1{ߡY` Q_;-\2ksYҎtc.+H?850Ɂ*SJ0*jȋ/;>xj%oc c_vKоxC :Q OtOd&b Z(,vr/Vg*svmb|RQ F*r6[H%s^r}t|S\fOSfl9{>C"Grm9[˪ɉ˩bs^%@b:) ( F%V㻅-hKͯ[D1Lr|XD,.Vyz[dELcbY6@hd"J|"w"TJ@a ~Z캂Tbr)Ɉ#$JS>5+frwA*!r!\-+QȅlTEOS% ҂30 V\ѣu.Z0B*dX2݄R=,Iυ&Z;=('FA>F֑QI)2>s?߾Ymtq577o{;JIѪ]<{z;8vN1PZ[Qe(HqpJ;㈪U)<{n"xKr|aA+bTX$7/^dt— ;*ϭWěYΧ.Rml5Z0fTW;KKx匳T0-᱖ EQ:W k^U -7tR)Q#E3)ol4ojZ|+ JzQyxf7󴗲6uv7EpR+5O'5doսyw&6|̲mtm o-kj#+fvB#ItLGW Nc_A59t]~+Bq!Y!D=ZMJuceZSU>eZO@3\.(,%&D}Vq" {ʭJ{BZj~l  tV.P]qq,wjw*UVK32B, KA^0\$xBWCРCF)|4$ͭ,͒ `p.%J G&h,RcIsbPcf*Du)*]CNq"cIphV{ɥMRD!pƴJID$QHk"RO,P1yNX 2PԖ@>j¨EIK<=!8RD8bXR #Hɂem̜.%VQᄴT91 C 2hI|`Rv$j&iZH'HDlr I[V |nV>p8%.A ‚qk.z@ͅTYgH{tR-,VX3UIZ5OE?3,L|4Br+=|^/L[2&#%Cz0*{ T7 ޠp,fl֌J:x!"W q8\DTh\'aħiIfގY!ar.+(rŽ`FnWFIM)0Ħ(-Fa"CF1SE$COy~PGpKK (;O.<`(ؔ,b] SgtR3I &X@WTHdBɜ&g}{NxEpӔWOխ޹㨘l.g5"_C|*Q{ n:^鯷CEU@\~xb1ƸO+"c/d /A(_tve8[NPĜf3R 9SRYhר@_RR_5\@bkḬ϶'#6u ?u?nnJp49Qh diwN8$J;F9ߧH۳%~vN CBw㤿qAcfQ'pPĦAt$ E)uF?H`JzV9x}2s氎3B.5 O3IJv5R Gr5}!M,Qw)2c "*R)f %2!M,I8+'˜T͈% ȱs=e,IBtj1;uh*U_!b RZQd !M.1G8*-{(S]TDgUZ+5%$i6(-MыƆx <7Z,a1W\yRsv-,$PBFHC}zgs]$rdz:&2*ТԊ`Z3pXU`vzjWyڇϚSb^Ha;ň*8Q [>ҨfI3IEG3\ZG|.YܨEŧE >Eo =`)'a&jR=A\CŬcsP׽S} QAu}򂠒PS ?cԺ)JiOa E{_j7H9*w TBԅt"_ǨK3"XĬ $xbB)q}Zx¢*A٪gښ۸_ˉk3W!k{Wg_RaĵD*$eI %HFF7uTd[ K I ۗF9+ 3D Ce9 :bA;8 \xc'`;3v\HXgU$$ zB4!îE!#g#' @O"ڎq-">YτE9EZO2VcfZI4E+4"P4{LГXB9q 90QēRP$4m (R ^uZjvk< {uxk@קk1{j})'pǸЊmGhȵ;ͻz?roӨQGԢznp/ŅI 'Jq\)/`V/1zT`?J)6|}S0 o X/fZh XT܋nԵ{,8>`4y+wkڐxqWFQ2@kHH[7bÑZhؾ!A O)#2L0TsvW\v$3)qPdQPsK?0lD)Dޗo6GNHm 0E);lƲ(*8ڼmВ0W\{ץj5-N6]L9$/(}WiuaF4 I.됄MȝѭB#LF zD3㉹YbJq1;xW=T{E҉XۺXk "byſ/ZE1ff5&w8<ٛś"b=ď.1_z[ V9 J=bc6:$yw~u:|ߧs~OVyezhik%ѥ$zÕ~%*߆ŨsKq=H-~d8֖,]D}?O1}<ؑ9uf(sP&ܙ c"4[w׿6۫ @;0=1]aРcRX2XFV9$T{i.p\fv)|bC32[u}kJsUTjrk e(z'Oo/WbZ)hU8%2X$e4JAiN s[gKcD[CPEU+raPwM @ }P6_/HݤgK[ ӫ竿{ nig;:wEsoeDtדъJ>cs#\9A,-]JKҚ1cQIW{|;]?D(f&"U~G1xG*(8Z_3'u_%6'n+ѥn%fku`cxR*ᶉƴQUfQzmaА !H#9` zmt;k;Xz6z.߄RM4oٚsX ՚Q }҄^(suz5r=ZSܲWFTG 'j0kWV`h0%磅Q3nY<ҏ1iAwTPqo\KuH(6k%BB^:d3;q0jCD=<^Ttnxs orKy7}Œ%AHKOCŒO D¥mm`**0D_bT=sPݣ!RxL{4#@Z/J 4wTqX+nV0lQ_XSYDD`cH;wynO5Œ='6 am>T׶3(ʭG3Z{R6w>˵9JX&^h5 E<#:tԌw~_JRL&C=O>77t<16zW]=Q4y|T3阭:̑`ƣnL54f hB#婘h)1VOz(`>E p2Czec:"˅}5؁v }eف߸ʄPH 6&g"uki1kly΢ ce{ Fs"88J{/ V8'[Liv%EbOnj*XGbJ2a3˘:LSmѵ*773DQ]Jx, 4aaZ;>}E2U^^\9-W ~Y"|sgtl_V,.BpJ_6-Kø*1tvQmba=]vJ5ǾPFTwJ/"@=^Dx"饜K'pQ$#OaH*R\D}(^ER)Oo[ҏgr֤'ޕWbQ}+3+=3 ;Ӧaxȫ) 狯UPvHV(ŤwɊ@:fa=f,J L .~&V>DUC޽a춿&JP4;,k8]O~X]bp.bzz:OO/t2RI58u# 믯1=%KLYiJHƓf59<>;Jp5c^iA˕c:M5yW.w=x~}l)agy.42ǰlj=b#_ĤdP1Bfx]}12 fJJeL9m]8L˵w8P'LLl$ٯ &MKR!JX )ʸeq֌W}Ev1iA6F^l9pz-C;jW=Mx wy#HD@T"*qqOk`뢇/%H mbfaS4>j_1w&qu O y&IYafSc*X!uLq˥U&5.wXaEDHa^J5Ψf/9F2*]qԘa~pPU+ad7QV~oRC>N"b W:#wAHp6͋b_'7ZLO)4,GL랜 S||\g_}.Bǟ +u{kQpU,B#L)ya[Q=%p]dށ4T}um1CjIE hsnz/cF0e_5FwjGqv)yYX7&YvY< ?^ӌ 5{[<ʆwVPZAit,DQd>@_-L/OVg>^k,ړQc*.lT'JoO~aӔ(@I8䒞 0e#D ^SR&Wn?+$R+Ci7/vo+4JU_ӛ@i5WH,׿/guS: m1Xt++âފ޿B/ ^>Ne9S'A_ūSE%߹r0o 5],_L%ێ$OxGzw$GaX#jN"}n”0SM J5xY8m6k|a*aC6ָrRcMVj/0(E/feϷ]DIaS|YO1͂Aߤu$MOB$MOqb:OYFb\L I2M @s3ɤ͍pN 7+r=_a> QDyY*3ݮL<X0WÊ,AuVO{o~mU1xzތxy!ةOL]35SU.}[B슇̛[ˎ?q 0\mOǵYu wq_we=rH4bR]5kOY\Ua1}#꺘IfV$ʪ,&`DmpTPөM6GgMǺˏ1lCtUn˵zO}1-e18Ӹvwg"4@oD$ \0kچΝ1ǽWjjP5V+h y >găPYN\Ќ4O4:3FAfV&>zAbtj\IX(-=A|Dk>1S0j\㊥"*89#oڐ~cnHQt>РBA=XWV̂j3aK 63&Dwd 9q}F}NDVRNtRnY6zSR&H1% j#&ye$;d4|N`^TID=D̵\r'K>F"7o_(ew2"9^˽`)r:K"Hn ={#Dp D!hx{SSKd֊"EvhÜSȕTA‚4(^pQ&CBڛ6 !*iC)Xрߩ\hZƵF^,;>JփF$$fu:!C:". 
6A pd K:UeUW0yRm `wjEY&F}ja)PÇD|$Gc@jt>jwT2*)"Z"n]z\!GGbWR\R_ #㐡p&'Q@IBl{hBTq:\^t;W8ax 19^- |I|=p=Nً.ՏlѩObkkBM(@ZpִϵGyc~&0Tvze_1FNgϸF3b١u ˟+d'?KY]H̷=v(c%Wd8$΄djꢕ .X}Ifmu( i8 E#i)AaZ-sn^B1pŭ`.HTM$q(DF3`֊q1ǻ4lD)&[ȻY gPAVpx;z/o-.ت/T/\D% 7NgBӣEwjfOF͉L(CG95Rxn|@/דA.̇<' 9s@x!Ɠ͏n1oWShoJm?g6f@gag(҅z04e̼lYOٙ G2TzPS>: -,̫s¬gBn:GnC6OOӨׇLn6G7OOaOczJ}.m@^+2y3B %v#>g K59gg/pnҭt.׆sMyFHo_ǃDn\7 n 02N!Xw L nD0d͑p^C]<S^0<>6ñs^y0EnSG|Z`g yAjWl*W/I#GbJLQzУ"x =)ěA8 HRv&O":XKY eu<cnkgu@!9@:|Iufmױ%G l xuԄ79.oaO  (243*"FeۼK=u=m]_sr ,K}kKxǜřњLf!{g 蔥4=j^(PZ pڬ|u0P5C5 y$1#׊0tD7eg9?*l̼͒df=<;Ԥ# 6OOӨ׃A'71=G9*K>-רOpk:yNn)m:Q^)3&\Upr5Ӊ(p@`IԨ(hZ=Tw3Q1b-Yӝ1ID=29hPT};ΑU%GV4us =_:>CS!iA";m6C 眇'OBTOw'giL@`g^XoD:*PR3 6y-$#NjFG]zG3!F@oNI@Zq9KLC$L]]';e'ewFE6:_jg0(wB|g3Ȳ\+ƥ{粽|y#ϲ7Ә{J9Lu7?s2xn;<1;܂sCޞ(LkJpd7\-EJD{Q6R=M;|4HkAD7ř'E3K<3{!X8ykжa#UЖDA8N1j!AEJïT ke 8ӫ~5zpJ(K N摕ſ04q"Ce[bMRD VH"%+S䠹zxL<2"hKOqpB(ZjĘZgΑ{2 M1 ."%KA)NCD:͍:.WȼE@qQ׋+"ΨGCXEQHKE@QY!>$t@0-aFeC@I沄 q{:_:b`Z۱& Eqcj & ĸ֙(#糦9:cGR r=OG颖'HVHVCBp-)I݊Kve6tv f\WF|֚ Mum7&* z6yZ^LJu{+>.WqqD|mrus{ßS[XqOm| }~(MшQ[݆QQ.:H8 >zՒ;7wE}W͢_=&ogՃ: 2|PgdzoEcq DKG|\ mA ?pAκ~S ^ ϧ>{E~Ș1৕lXŶ%K*㟂 jw.2t؆-W1&*\|?Յ].?~׫sRNdM=#.>Z"p sJ0+0(o8)dy(à]8Cg"e:p3>.!3>e U݂#:9%{Ī:xOPg!P NRY1 wf@扪<PӍClMDFl]ӧ2Rx^0L YaT;msk,('$P NuroS󾸹ğ|ӑ//{tN*x#1hj 7Ai#!t!yͺ A)ީ>1:wbh/?J(8 ǁ4IJF4rޑڮ{6˕.Q9pKXNKm_2'ZP% l\p2s{:,bWmv.qwa޵m,З6Iܢ@{ .زG YR)YZ( [ȝ΋'ydup Ԫå3ڤrޤPسbsIwU"49*U`;%D}* MSC6 2Bi(Ɲx=-De< ]; `QCShU\Ϝ N'7_Q&=[#v *Gv wpEw1Qܕ8&gH{rp8!|.oƭO)$q6pat4#La Ұr\kLkT*ؓ0SFݹ?,nKYT8|(P _ Ji 2x;)SQ}>^@yd#F*`b-~`1S >@ :\KSIJkzO9J?Q9~{]: NF5 0ƸVa`&$β˧2aE' AiQ3<䣴gB( Jnw|7x*ܒfWF_q 7ޫI5npXUkpK?^B0vIm{i M\ɩ`6{^ؤc>"6ط SԅYhx ?ыy $*g5nW u)qFS% ib]PC*K V gSpJ(ۊhN!s7'-gS*ԓ:ړ(VQl< ]܊Kl*g$_//Etqq-:GM]տ;\S|JȞ9HˍVJn^?ʅs\6%O 2IN|7ŻV`iMKή& 7\ˈ "o,_jq?ѻ/ӟFxGJrh@}/QtV4H)tvP Vu)~Ea{8jm߂3zcTRC}ZK͠))i?ƪ%w7*nb,_ѯŎ j=C2/`˙u / &SNL OEl ?'F1h:mbBBTSlQklMwxM'Pd5o[L)Dڮc-VRI0&,J>I 5,Kc)U$%`' Kd Ki 8MRf(r=| i,StF;1{`P8QC)i Ҽh X2ΝY˛˞NjYweG!פU2;h}VynyrǫcNI>4Ayp >{uFlXzyy Fa7B$[e<_:Nsb؛t CCUJ3 C|PF% WL'AL]2R씡 dH[0FHh n ˓fW[>zB %eHyʐG RVMOV|p9sh>zW(dYCȘ c E |ƙA4U UIbA-I"djqdSJ#^N[Y0/AgZKmLr S v|JM1T00 51 *n}I=.N`!z hqwat(уeވwgF'j-Cpg7ݭo=hqwaԄ(G~r4md|_Y"DVeM"!K>n%t)%<7ZqCGč2׿z(s$96jgNM2!2|pXlc,!E^pݕTQM޵Y]h2u_a~<.J]qaXZ4dЊt0']>y)LL]q#jTGXzS.\ .8)|zſ}\F%`-Q|B%”+\u5J!C NNPl Yc}KcW iX.8ZFߢl0)"UF AJj:)~/X3#u |Ϫ΁npԿ{jsc# z e闗/u=rM2uCr)}kgJgr\ȰRث|_- | $@xnW| T{=WLr}_:@yݹMd>\L@휔q넨bIMvA@R=Àꔠp1T'\S478ǁ@uDQR)A;*ױV+2܌H.}ݐrrY L% gRMj zlbw,|]>zkU'bR)B>5(º>b]QDki}p(Z{)*M}z$RԂҦ*_ 9,]"p>ѹ^. guT;H<|uv$|uLtwX_pń<ПK뗿=ԫ=vb_;; b&gw27(T"MxIdڔb،`H1~ Y\-&xm^%5LlvgoI y>KWB(Ui>qȤpݐa+xħ| sܴJl`\B-@!p|fw#Ax8n;`')4f2S񤇌\l} s(:+Bkŀ6m`6j޺l򪖟WhaGJɲ^u7ƿcEYxC xa֋4#6w]νC4B[]vl;5^5r vTdgGT-Z)i5~g(L]q\ahSՂXszjO,ch;;P R3{HT5&u@΄AD AHSIrDYa) ⫴oW(5YKb4M `F P\9L\>/885`&4R9Os Dsn)oOex$bR&]w=~YbՃ5|a;s|InkùDӼܦTr61Wca ֋N0Mf5cG6?9>#87aD̋]JA!z13uldo2Ph>m9,"&84zFzd.m:bOdn*.򪨉KX-/'g9߼$c˱ٗ #i:x*ym1_g6H6aPIO ֬nW.Q2lCuok7SGĎ!CQw]Ep`wWݫ2=bXrGË #ņlrpVN1.+K$}ve>/fDXT pt, )+0T*'93bY>͞&\J` PYay-s(Bhѣc^.fTּ#h`w=~u{l<Χ3RD]v# kP6 u'WlC)KX㦰cw*X qfaEC;xś գÏ!Ϧt^i)6жǰ5\pS 'p_ 3'o ͸#~g ڶNov1 _p2J4%!J%n$| ž -|mt91Vտ`QQ]tAaQAdNW̬9Nʉͯʉͯ7d3"E*e# g:Rb!Ur-!FE&ERl?|_ }6e"c-F*ffv1i=bM$F-S[5׍*t|{g ool>'_.;wM]fe1ӓwvv(B*CEA(`_\S# 0#IJ@!3'GȹxBerLlEϏgHp7;܉Ý;܉ÝwG0<ʋ`$I(-d•"7 '( ),I* )IR};PԍGR/j6|J *CAU H$( Ay"ֹ<4A/Բdm6H*S 69&eTH4EMKD DRmH53T4MTLWɛhTASHp/-X"]JʣId 44-aCfmp !>Q3P@Gnkp7w7,~[lS?(k, *R%L*W? ??qx Fk!#. 
14462ms (14:04:31.522)
Jan 23 14:04:31 crc kubenswrapper[4775]: Trace[1851784562]: [14.462153302s] [14.462153302s] END
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.522785 4775 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.523006 4775 trace.go:236] Trace[58725198]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 14:04:17.885) (total time: 13637ms):
Jan 23 14:04:31 crc kubenswrapper[4775]: Trace[58725198]: ---"Objects listed" error: 13637ms (14:04:31.522)
Jan 23 14:04:31 crc kubenswrapper[4775]: Trace[58725198]: [13.637078752s] [13.637078752s] END
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.523020 4775 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.525195 4775 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.529957 4775 trace.go:236] Trace[1499191215]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 14:04:16.703) (total time: 14826ms):
Jan 23 14:04:31 crc kubenswrapper[4775]: Trace[1499191215]: ---"Objects listed" error: 14826ms (14:04:31.529)
Jan 23 14:04:31 crc kubenswrapper[4775]: Trace[1499191215]: [14.82688907s] [14.82688907s] END
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.529978 4775 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.533178 4775 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.541241 4775 trace.go:236] Trace[1454904241]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 14:04:17.580) (total time: 13960ms):
Jan 23 14:04:31 crc kubenswrapper[4775]: Trace[1454904241]: ---"Objects listed" error: 13960ms (14:04:31.541)
Jan 23 14:04:31 crc kubenswrapper[4775]: Trace[1454904241]: [13.96055475s] [13.96055475s] END
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.541302 4775 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.551652 4775 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.558523 4775 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:45984->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.558632 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:45984->192.168.126.11:17697: read: connection reset by peer"
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.559182 4775 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.559238 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.559319 4775 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:46288->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.559449 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:46288->192.168.126.11:17697: read: connection reset by peer"
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.648770 4775 apiserver.go:52] "Watching apiserver"
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.651195 4775 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.651554 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"]
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.651908 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.652021 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.652021 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.652070 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.652247 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.652312 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.652631 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.652672 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.652699 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.656315 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.656638 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.656646 4775 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.656872 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.658023 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.658114 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.658266 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.658430 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.659661 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.662836 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.667499 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 04:05:44.204410091 +0000 UTC
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.702140 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.716145 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.734643 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.735321 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.735232 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.735416 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.735893 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.735981 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.736453 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.736505 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.736554 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.736581 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.737033 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.738941 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.736980 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.738989 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.737384 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.739011 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.739169 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.739206 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.739325 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.739563 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.739645 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.739953 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.739987 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.740104 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.739677 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.740201 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.740229 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.740513 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.740937 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741025 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741463 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741522 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741164 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741583 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741611 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741652 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741678 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741700 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741724 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741746 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741769 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741820 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741858 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741903 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741936 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741968 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742070 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742118 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742142 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742166 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742189 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742212 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742234 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742262 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742286 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742310 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742341 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742369 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742394 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742401 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742443 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742466 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742485 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742502 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742519 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742535 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742551 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742566 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742585 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742601 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742651 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742668 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742684 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742691 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742701 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742742 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742768 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742791 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742833 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742909 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742934 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742964 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742995 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743031 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743066 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743101 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743134 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743157 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743179 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743200 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743220 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743241 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743262 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743283 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743305 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743326 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743350 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743373 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743397 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743419 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743444 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743472 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743494 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743538 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743565 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743588 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743609 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743631 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743654 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743769 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743793 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743839 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743864 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743887 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743910 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743932 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743953 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743978 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744000 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744024 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744054 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744097 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744119 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\"
(UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744142 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744163 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744186 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744208 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744230 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744254 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744275 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744297 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744320 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744346 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod 
\"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744369 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744391 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744414 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744435 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744457 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744478 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744500 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744523 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744544 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744564 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 23 
14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744588 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744612 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744654 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744675 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744696 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744719 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744765 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744825 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744858 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744889 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod 
\"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744924 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744955 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744987 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745012 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745038 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745084 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745106 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745130 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745153 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745176 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745203 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745226 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745250 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745272 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745296 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745319 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745342 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745363 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745387 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745417 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745443 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745466 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745489 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745512 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.746518 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.746631 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.746668 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.746702 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.746739 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.746775 4775 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.746829 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.746866 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.746914 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.746949 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.746983 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747016 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747054 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747087 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747119 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747156 4775 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747190 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747221 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747254 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747290 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747327 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747363 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747396 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747430 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747465 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: 
\"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747501 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747552 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747578 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747603 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747629 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747655 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747682 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747705 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747727 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747751 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: 
\"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747776 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747844 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747878 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747909 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747936 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747968 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748005 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748053 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:31 crc kubenswrapper[4775]: 
I0123 14:04:31.748090 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748129 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748162 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748195 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748230 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748265 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748298 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748385 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748412 4775 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748435 4775 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748458 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748476 4775 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748496 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748516 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748536 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748602 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748617 4775 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748629 4775 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748643 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748659 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748673 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748687 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748701 4775 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748714 4775 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748729 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.749319 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.750505 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.762260 4775 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.771258 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.772820 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.773162 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.777313 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742880 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743050 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743404 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743570 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743710 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743868 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744014 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744181 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744311 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744438 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745712 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745927 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.793443 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.746191 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.746291 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744796 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.749026 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.749157 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.749391 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.749416 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.749438 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.749493 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.749585 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.749792 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.749880 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.749926 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.749983 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.750010 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.750288 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.752571 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.752763 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.753370 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.753391 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.753614 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.754252 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.757621 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.766278 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.766329 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.766427 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.766563 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.766693 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.767083 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.767292 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.767407 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.767529 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.767594 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.767787 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.768326 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.768631 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.768725 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.769327 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.769350 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.769898 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.769950 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.769964 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.771594 4775 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.773075 4775 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.779347 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.779703 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.781110 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.782107 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.782259 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.782432 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.783108 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.783294 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.783368 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.783619 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.783731 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.783769 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.783923 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.783990 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.784335 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.784400 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.784495 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.784627 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.785055 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.785059 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.785304 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.785548 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.785550 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.785605 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.785882 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.786403 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.786656 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.786783 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.786810 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.786844 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.787146 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.787187 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.787244 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.787419 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.787529 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.788677 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.788984 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.790662 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.791189 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.791312 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). 
InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.791368 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.791659 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.791657 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.792106 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.792184 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.792669 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.792571 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.793472 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.793605 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.793869 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.793939 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:04:32.293913075 +0000 UTC m=+19.288741815 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.794635 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.793580 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.794820 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.794844 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.795003 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.795024 4775 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.795107 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:32.295081036 +0000 UTC m=+19.289909776 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.795136 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:32.295128447 +0000 UTC m=+19.289957187 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.793865 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.794285 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.794308 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.795189 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.795224 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:32.29521814 +0000 UTC m=+19.290046880 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.795660 4775 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.795726 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:32.295710183 +0000 UTC m=+19.290538933 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.795829 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.796035 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.796286 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.796357 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.796866 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.796980 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.797038 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.797058 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.797176 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.797260 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.797387 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.797400 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.797431 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.797927 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.798438 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.798645 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.799411 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.799510 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.799656 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.802296 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.802986 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.803419 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.805349 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.805453 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.805970 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.806174 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.806264 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.807148 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.807297 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.808241 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). 
InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.808347 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.808449 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.808516 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.808554 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.808594 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.809035 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.809041 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.809081 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). 
InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.809112 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.809394 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.809411 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.809481 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.809577 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.809772 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.810276 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.810380 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.810898 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.811000 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.811167 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.811170 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.811181 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.811282 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.811376 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.811511 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.811737 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.811972 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.812239 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.812613 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.812619 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.812865 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.813208 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.813413 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.814419 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.816101 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.816954 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.817365 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.818593 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.818946 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.819038 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.819140 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.821595 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.822160 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.824500 4775 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2" exitCode=255 Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.824617 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2"} Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.831330 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.835764 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.836547 4775 scope.go:117] "RemoveContainer" containerID="f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.839536 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.839788 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.841137 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854100 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854165 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854243 4775 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854257 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854265 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854274 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854283 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854292 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854300 4775 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854308 4775 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854317 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854325 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854333 4775 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854343 4775 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854353 4775 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854361 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on 
node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854369 4775 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854377 4775 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854386 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854396 4775 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854404 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854414 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854423 4775 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854432 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854441 4775 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854449 4775 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854457 4775 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854466 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854475 4775 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc 
kubenswrapper[4775]: I0123 14:04:31.854484 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854469 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854492 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854590 4775 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854607 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854622 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854638 4775 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854652 4775 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854623 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854694 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854727 4775 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854743 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854757 4775 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854770 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854782 4775 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854795 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854832 4775 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854844 4775 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854856 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854868 4775 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854880 4775 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854892 4775 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854904 4775 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854916 4775 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854927 4775 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc 
kubenswrapper[4775]: I0123 14:04:31.854939 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854950 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854961 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854973 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854983 4775 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854995 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855006 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855017 4775 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855030 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855044 4775 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855060 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855076 4775 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855090 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855122 4775 reconciler_common.go:293] "Volume detached for volume 
\"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855134 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855145 4775 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855157 4775 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855168 4775 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855180 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855191 4775 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855203 4775 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855213 4775 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855225 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855236 4775 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855252 4775 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855264 4775 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855275 4775 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855289 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855305 4775 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855317 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855336 4775 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855347 4775 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855359 4775 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855371 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855382 4775 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855393 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855404 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855414 4775 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855426 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: 
I0123 14:04:31.855436 4775 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855447 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855460 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855471 4775 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855482 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855493 4775 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855503 4775 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855513 4775 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855526 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855537 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855548 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855559 4775 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855570 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855582 4775 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855593 4775 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855633 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855645 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855658 4775 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855677 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855688 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855699 4775 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855709 4775 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855720 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855731 4775 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855743 4775 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855754 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855764 4775 reconciler_common.go:293] "Volume detached for volume \"console-config\" 
(UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855776 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855787 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855816 4775 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855832 4775 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855842 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855853 4775 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855865 4775 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855877 4775 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855888 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855898 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855909 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855920 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855930 4775 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855942 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855953 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855966 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855978 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855989 4775 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856001 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856012 4775 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856023 4775 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856037 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856053 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856069 4775 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856081 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856093 4775 reconciler_common.go:293] "Volume 
detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856105 4775 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856116 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856126 4775 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856137 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856148 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856158 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856170 4775 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856181 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856198 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856212 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856222 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856233 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856245 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856256 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856268 4775 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856279 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856291 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856301 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856313 4775 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856324 4775 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856335 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856348 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856395 4775 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856409 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856420 4775 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856431 4775 
reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856443 4775 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856453 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856465 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856475 4775 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856486 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856498 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856510 4775 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856522 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.859272 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.873553 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.882425 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.892999 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.901495 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.909427 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.918253 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23
T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.929427 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.936074 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.965331 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.972277 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.979923 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 14:04:31 crc kubenswrapper[4775]: W0123 14:04:31.986202 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-26ee4b029a215201a63b86a15db85debf05093f30d7e1fcf274eabd36563edf4 WatchSource:0}: Error finding container 26ee4b029a215201a63b86a15db85debf05093f30d7e1fcf274eabd36563edf4: Status 404 returned error can't find the container with id 26ee4b029a215201a63b86a15db85debf05093f30d7e1fcf274eabd36563edf4 Jan 23 14:04:31 crc kubenswrapper[4775]: W0123 14:04:31.998111 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-7c36dd033d87222d0fe0f7c987e33d9427b14f64f1cf8383c3a518b259684a5e WatchSource:0}: Error finding container 7c36dd033d87222d0fe0f7c987e33d9427b14f64f1cf8383c3a518b259684a5e: Status 404 returned error can't find the container with id 7c36dd033d87222d0fe0f7c987e33d9427b14f64f1cf8383c3a518b259684a5e Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.359716 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.359935 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.360058 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:04:32 crc kubenswrapper[4775]: E0123 14:04:32.360167 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:04:33.360068675 +0000 UTC m=+20.354897455 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:04:32 crc kubenswrapper[4775]: E0123 14:04:32.360284 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.360289 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:32 crc kubenswrapper[4775]: E0123 14:04:32.360316 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 14:04:32 crc kubenswrapper[4775]: E0123 14:04:32.360321 4775 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.360395 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:04:32 crc kubenswrapper[4775]: E0123 14:04:32.360457 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:33.360402463 +0000 UTC m=+20.355231283 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 14:04:32 crc kubenswrapper[4775]: E0123 14:04:32.360506 4775 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 14:04:32 crc kubenswrapper[4775]: E0123 14:04:32.360565 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:33.360547557 +0000 UTC m=+20.355376337 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 14:04:32 crc kubenswrapper[4775]: E0123 14:04:32.360335 4775 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:32 crc kubenswrapper[4775]: E0123 14:04:32.360719 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:33.360699111 +0000 UTC m=+20.355527891 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:32 crc kubenswrapper[4775]: E0123 14:04:32.360865 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 14:04:32 crc kubenswrapper[4775]: E0123 14:04:32.360904 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 14:04:32 crc kubenswrapper[4775]: E0123 14:04:32.360928 4775 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:32 crc kubenswrapper[4775]: E0123 14:04:32.361005 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:33.360982509 +0000 UTC m=+20.355811289 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.668031 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 23:14:33.59790214 +0000 UTC Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.829669 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9"} Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.829771 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8c5c7aff9c6468ed51eedb93fdc9e478ee18be1bd2ef3a7a1d34fd661af8fba7"} Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.837471 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde"} Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.837578 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573"} Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.837610 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"26ee4b029a215201a63b86a15db85debf05093f30d7e1fcf274eabd36563edf4"} Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.845121 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.847138 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c"} Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.848448 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.849939 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"7c36dd033d87222d0fe0f7c987e33d9427b14f64f1cf8383c3a518b259684a5e"} Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.866057 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df18
5ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942
ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.879071 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.890397 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.901419 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.914081 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.929272 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.948703 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23
T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.977594 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.003747 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.030027 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-m
etrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.045908 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.057932 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.071371 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.084985 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.096789 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.112760 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.369501 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.369624 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.369675 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:04:33 crc kubenswrapper[4775]: E0123 14:04:33.369747 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:04:35.369687816 +0000 UTC m=+22.364516626 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:04:33 crc kubenswrapper[4775]: E0123 14:04:33.369836 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 14:04:33 crc kubenswrapper[4775]: E0123 14:04:33.369840 4775 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.369867 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:33 crc kubenswrapper[4775]: E0123 14:04:33.369960 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:35.369932913 +0000 UTC m=+22.364761743 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 14:04:33 crc kubenswrapper[4775]: E0123 14:04:33.369990 4775 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.370009 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:04:33 crc kubenswrapper[4775]: E0123 14:04:33.369863 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 14:04:33 crc kubenswrapper[4775]: E0123 14:04:33.370106 4775 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:33 crc kubenswrapper[4775]: E0123 14:04:33.370073 4775 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:35.370056726 +0000 UTC m=+22.364885556 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 14:04:33 crc kubenswrapper[4775]: E0123 14:04:33.370115 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 14:04:33 crc kubenswrapper[4775]: E0123 14:04:33.370136 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 14:04:33 crc kubenswrapper[4775]: E0123 14:04:33.370145 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:35.370135918 +0000 UTC m=+22.364964668 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:33 crc kubenswrapper[4775]: E0123 14:04:33.370150 4775 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:33 crc kubenswrapper[4775]: E0123 14:04:33.370200 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:35.37018902 +0000 UTC m=+22.365017850 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.668790 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 13:40:16.186463366 +0000 UTC Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.713742 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.713790 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:04:33 crc kubenswrapper[4775]: E0123 14:04:33.713915 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.714007 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:33 crc kubenswrapper[4775]: E0123 14:04:33.714166 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:04:33 crc kubenswrapper[4775]: E0123 14:04:33.714275 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.722247 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.724337 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.726979 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.727199 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.728841 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.731442 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.732765 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.734513 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.735721 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.736498 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.737657 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.738290 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.739508 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.740038 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.740529 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.741441 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.741964 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.742854 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.743219 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.743746 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.744810 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.745370 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.746480 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.746946 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.748090 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.748583 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.749372 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" 
path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.750549 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.751018 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.751942 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.752390 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.753403 4775 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.753395 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117
b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.753683 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.755297 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" 
path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.756426 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.756854 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.758589 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.759269 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.760198 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.760998 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.762033 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.762485 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.763627 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.764397 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.765439 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.765964 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.766918 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.767476 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.768873 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.769073 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.769407 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.770354 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.770866 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.771764 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.772357 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.772826 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.782125 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.797741 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.814097 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.829761 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.845468 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.983059 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.987208 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.991746 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.998150 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.014725 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.032086 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.052623 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.071577 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.083609 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.096574 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.110990 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.126574 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.149907 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.170235 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.184436 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.201511 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.218066 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.243606 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.262132 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.280514 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.669390 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 18:22:40.193282639 +0000 UTC Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.725882 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.728149 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.728225 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.728250 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.728348 4775 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.735215 4775 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.735534 4775 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.736786 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.736863 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.736883 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.736905 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.736921 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:34Z","lastTransitionTime":"2026-01-23T14:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:34 crc kubenswrapper[4775]: E0123 14:04:34.759215 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.763712 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.763772 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.763790 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.763838 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.763856 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:34Z","lastTransitionTime":"2026-01-23T14:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:34 crc kubenswrapper[4775]: E0123 14:04:34.783210 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.786987 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.787037 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.787052 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.787076 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.787095 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:34Z","lastTransitionTime":"2026-01-23T14:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:34 crc kubenswrapper[4775]: E0123 14:04:34.812626 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.817688 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.817753 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.817769 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.817790 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.817843 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:34Z","lastTransitionTime":"2026-01-23T14:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:34 crc kubenswrapper[4775]: E0123 14:04:34.831161 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.835247 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.835304 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.835322 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.835344 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.835360 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:34Z","lastTransitionTime":"2026-01-23T14:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:34 crc kubenswrapper[4775]: E0123 14:04:34.884763 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: E0123 14:04:34.884948 4775 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.886778 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
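The retry block above has a single root cause: every node-status PATCH from the kubelet is routed through the validating webhook "node.network-node-identity.openshift.io" at 127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24 while the node clock reads 2026-01-23, so each attempt dies in the TLS handshake until the kubelet exhausts its retry budget ("update node status exceeds retry count"). The failing check is a plain clock-versus-notAfter comparison. A minimal sketch of the same inspection from the node, assuming the endpoint from the log is reachable and the third-party cryptography package is installed:

```python
import ssl
from datetime import datetime, timezone

from cryptography import x509

# Endpoint taken from the log line; adjust for other webhooks.
WEBHOOK_ADDR = ("127.0.0.1", 9743)

# get_server_certificate() with ca_certs=None uses CERT_NONE, so an
# expired certificate can still be fetched for inspection.
pem = ssl.get_server_certificate(WEBHOOK_ADDR)
cert = x509.load_pem_x509_certificate(pem.encode())

now = datetime.now(timezone.utc)
not_after = cert.not_valid_after_utc  # cryptography >= 42; older releases expose not_valid_after
if now > not_after:
    # Mirrors the kubelet-side failure recorded in the log.
    print(f"x509: certificate has expired: current time {now:%Y-%m-%dT%H:%M:%SZ} "
          f"is after {not_after:%Y-%m-%dT%H:%M:%SZ}")
else:
    print(f"certificate valid until {not_after}")
```

Until the webhook certificate is rotated (or the node clock corrected), every status update in this log keeps failing the same way.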
event="NodeHasSufficientMemory" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.886847 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.886865 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.886886 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.886901 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:34Z","lastTransitionTime":"2026-01-23T14:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.989566 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.989605 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.989614 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.989630 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.989641 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:34Z","lastTransitionTime":"2026-01-23T14:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.092391 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.092446 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.092463 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.092489 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.092526 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:35Z","lastTransitionTime":"2026-01-23T14:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.195035 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.195112 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.195136 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.195164 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.195186 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:35Z","lastTransitionTime":"2026-01-23T14:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.298531 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.298584 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.298601 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.298627 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.298644 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:35Z","lastTransitionTime":"2026-01-23T14:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
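Each setters.go record in the loop above republishes the same Ready=False condition: the container runtime reports NetworkReady=false because no CNI configuration file has appeared in /etc/kubernetes/cni/net.d/ yet, and the kubelet re-records NodeNotReady on every sync until one does. A minimal sketch of that readiness gate, assuming the usual libcni convention that any *.conf, *.conflist, or *.json file in the directory counts as a network config:

```python
from pathlib import Path

# Directory taken from the log message; the extension list follows the
# standard libcni convention (an assumption about this cluster's runtime).
CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")

def network_ready(conf_dir: Path = CNI_CONF_DIR) -> bool:
    """True once at least one CNI network config is present."""
    return any(
        p
        for pattern in ("*.conf", "*.conflist", "*.json")
        for p in conf_dir.glob(pattern)
    )

if __name__ == "__main__":
    print("NetworkReady =", network_ready())
```

Once the network operator writes a config into that directory, the condition flips on the next sync and the NodeNotReady loop stops.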
Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.387185 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.387286 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.387330 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.387396 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 14:04:35 crc kubenswrapper[4775]: E0123 14:04:35.387449 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:04:39.387413519 +0000 UTC m=+26.382242289 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.387520 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:04:35 crc kubenswrapper[4775]: E0123 14:04:35.387540 4775 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 23 14:04:35 crc kubenswrapper[4775]: E0123 14:04:35.387619 4775 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 23 14:04:35 crc kubenswrapper[4775]: E0123 14:04:35.387635 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:39.387612284 +0000 UTC m=+26.382441054 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 23 14:04:35 crc kubenswrapper[4775]: E0123 14:04:35.387548 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 23 14:04:35 crc kubenswrapper[4775]: E0123 14:04:35.387733 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 23 14:04:35 crc kubenswrapper[4775]: E0123 14:04:35.387767 4775 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 14:04:35 crc kubenswrapper[4775]: E0123 14:04:35.387677 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:39.387663825 +0000 UTC m=+26.382492605 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 23 14:04:35 crc kubenswrapper[4775]: E0123 14:04:35.387910 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:39.387882341 +0000 UTC m=+26.382711151 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 14:04:35 crc kubenswrapper[4775]: E0123 14:04:35.388020 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 23 14:04:35 crc kubenswrapper[4775]: E0123 14:04:35.388046 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 23 14:04:35 crc kubenswrapper[4775]: E0123 14:04:35.388067 4775 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 14:04:35 crc kubenswrapper[4775]: E0123 14:04:35.388132 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:39.388112157 +0000 UTC m=+26.382940957 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.401965 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.402024 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.402046 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.402076 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.402098 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:35Z","lastTransitionTime":"2026-01-23T14:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.505727 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.505791 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.505854 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.505900 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.505924 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:35Z","lastTransitionTime":"2026-01-23T14:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.609308 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.609365 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.609384 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.609409 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.609427 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:35Z","lastTransitionTime":"2026-01-23T14:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.670379 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 16:31:58.188000595 +0000 UTC Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.712083 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.712122 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.712133 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.712148 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.712159 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:35Z","lastTransitionTime":"2026-01-23T14:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.713515 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.713526 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:04:35 crc kubenswrapper[4775]: E0123 14:04:35.713705 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.713935 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:35 crc kubenswrapper[4775]: E0123 14:04:35.714109 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:04:35 crc kubenswrapper[4775]: E0123 14:04:35.714307 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.814597 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.814650 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.814669 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.814693 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.814710 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:35Z","lastTransitionTime":"2026-01-23T14:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.864228 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4"} Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.885362 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:35Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.902812 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:35Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.947137 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:35Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.948837 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.948871 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.948881 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.948895 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.948904 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:35Z","lastTransitionTime":"2026-01-23T14:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.957975 4775 csr.go:261] certificate signing request csr-hz74m is approved, waiting to be issued Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.977379 4775 csr.go:257] certificate signing request csr-hz74m is issued Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.983042 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b2670
2f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:35Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.006540 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.008078 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-kv8zk"] Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.008337 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-kv8zk" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.010238 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.010323 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.011223 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.028377 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\
\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is 
complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.048088 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.051130 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.051158 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.051166 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.051178 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.051187 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:36Z","lastTransitionTime":"2026-01-23T14:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.086048 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.091666 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c6e25021-b268-4a6c-851d-43eb5504a3d2-hosts-file\") pod \"node-resolver-kv8zk\" (UID: \"c6e25021-b268-4a6c-851d-43eb5504a3d2\") " pod="openshift-dns/node-resolver-kv8zk" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.091722 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmxcw\" (UniqueName: \"kubernetes.io/projected/c6e25021-b268-4a6c-851d-43eb5504a3d2-kube-api-access-fmxcw\") pod \"node-resolver-kv8zk\" (UID: \"c6e25021-b268-4a6c-851d-43eb5504a3d2\") " pod="openshift-dns/node-resolver-kv8zk" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.104599 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.132793 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.153413 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.153445 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.153455 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.153471 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.153486 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:36Z","lastTransitionTime":"2026-01-23T14:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.166507 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.187565 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.192650 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c6e25021-b268-4a6c-851d-43eb5504a3d2-hosts-file\") pod \"node-resolver-kv8zk\" (UID: \"c6e25021-b268-4a6c-851d-43eb5504a3d2\") " pod="openshift-dns/node-resolver-kv8zk" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.192734 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmxcw\" (UniqueName: \"kubernetes.io/projected/c6e25021-b268-4a6c-851d-43eb5504a3d2-kube-api-access-fmxcw\") pod \"node-resolver-kv8zk\" (UID: \"c6e25021-b268-4a6c-851d-43eb5504a3d2\") " pod="openshift-dns/node-resolver-kv8zk" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.192825 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c6e25021-b268-4a6c-851d-43eb5504a3d2-hosts-file\") pod \"node-resolver-kv8zk\" (UID: \"c6e25021-b268-4a6c-851d-43eb5504a3d2\") " pod="openshift-dns/node-resolver-kv8zk" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.200500 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.214166 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmxcw\" (UniqueName: \"kubernetes.io/projected/c6e25021-b268-4a6c-851d-43eb5504a3d2-kube-api-access-fmxcw\") pod \"node-resolver-kv8zk\" (UID: \"c6e25021-b268-4a6c-851d-43eb5504a3d2\") " pod="openshift-dns/node-resolver-kv8zk" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.215522 4775 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.229748 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.243025 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.253648 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.255347 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.255391 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.255403 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.255420 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.255429 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:36Z","lastTransitionTime":"2026-01-23T14:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.263980 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.273093 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.321493 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-kv8zk" Jan 23 14:04:36 crc kubenswrapper[4775]: W0123 14:04:36.332749 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6e25021_b268_4a6c_851d_43eb5504a3d2.slice/crio-8a869a6f98e205e7ddb8e80b600864259e7faf3ae41f6a70ef78ed7edd879eab WatchSource:0}: Error finding container 8a869a6f98e205e7ddb8e80b600864259e7faf3ae41f6a70ef78ed7edd879eab: Status 404 returned error can't find the container with id 8a869a6f98e205e7ddb8e80b600864259e7faf3ae41f6a70ef78ed7edd879eab Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.360654 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.360683 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.360693 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.360706 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.360716 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:36Z","lastTransitionTime":"2026-01-23T14:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.463474 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.463522 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.463534 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.463555 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.463570 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:36Z","lastTransitionTime":"2026-01-23T14:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.508341 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-hpxpf"] Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.508691 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.509864 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-4q9qg"] Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.510461 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.511264 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.511278 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.511528 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.511963 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.512019 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.513940 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-8j5kp"] Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.514593 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.514995 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.515730 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.515753 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.516023 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.516232 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.516448 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.517096 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.540236 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68
e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.559725 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.566307 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.566348 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.566361 4775 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.566380 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.566395 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:36Z","lastTransitionTime":"2026-01-23T14:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.574319 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.585424 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.595630 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3dd95cd2-5d8c-4e14-bc94-67bb80749037-system-cni-dir\") pod 
\"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.595711 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3dd95cd2-5d8c-4e14-bc94-67bb80749037-cni-binary-copy\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.595740 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3dd95cd2-5d8c-4e14-bc94-67bb80749037-cnibin\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.595776 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3dd95cd2-5d8c-4e14-bc94-67bb80749037-os-release\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.595837 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3dd95cd2-5d8c-4e14-bc94-67bb80749037-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.595869 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3dd95cd2-5d8c-4e14-bc94-67bb80749037-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.598608 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.612229 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd7
91fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.629302 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.654638 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.669529 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.669566 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.669574 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.669589 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.669598 4775 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:36Z","lastTransitionTime":"2026-01-23T14:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.670795 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 21:19:26.79075883 +0000 UTC Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.689362 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.696372 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4fea0767-0566-4214-855d-ed0373946271-mcd-auth-proxy-config\") pod \"machine-config-daemon-4q9qg\" (UID: \"4fea0767-0566-4214-855d-ed0373946271\") " pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.696478 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-cnibin\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.696504 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-multus-conf-dir\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.696573 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3dd95cd2-5d8c-4e14-bc94-67bb80749037-system-cni-dir\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.696632 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-os-release\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.696654 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-multus-socket-dir-parent\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.696777 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-run-k8s-cni-cncf-io\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.696720 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3dd95cd2-5d8c-4e14-bc94-67bb80749037-system-cni-dir\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.696855 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-etc-kubernetes\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.696925 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-run-multus-certs\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.696990 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-hostroot\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.697017 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-multus-cni-dir\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.697069 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-run-netns\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.697097 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9shl\" (UniqueName: \"kubernetes.io/projected/ba4447c0-bada-49eb-b6b4-b25feff627a9-kube-api-access-v9shl\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.697163 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-var-lib-cni-multus\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.697232 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4fea0767-0566-4214-855d-ed0373946271-rootfs\") pod \"machine-config-daemon-4q9qg\" (UID: \"4fea0767-0566-4214-855d-ed0373946271\") " pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.697297 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-system-cni-dir\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.697324 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ba4447c0-bada-49eb-b6b4-b25feff627a9-multus-daemon-config\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.697438 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3dd95cd2-5d8c-4e14-bc94-67bb80749037-cni-binary-copy\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.697463 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4fea0767-0566-4214-855d-ed0373946271-proxy-tls\") pod \"machine-config-daemon-4q9qg\" (UID: \"4fea0767-0566-4214-855d-ed0373946271\") " pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.698247 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3dd95cd2-5d8c-4e14-bc94-67bb80749037-cni-binary-copy\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.698328 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-var-lib-cni-bin\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.698420 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3dd95cd2-5d8c-4e14-bc94-67bb80749037-cnibin\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.698480 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3dd95cd2-5d8c-4e14-bc94-67bb80749037-os-release\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc 
kubenswrapper[4775]: I0123 14:04:36.698631 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gddb\" (UniqueName: \"kubernetes.io/projected/3dd95cd2-5d8c-4e14-bc94-67bb80749037-kube-api-access-6gddb\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.698531 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3dd95cd2-5d8c-4e14-bc94-67bb80749037-cnibin\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.698716 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbc24\" (UniqueName: \"kubernetes.io/projected/4fea0767-0566-4214-855d-ed0373946271-kube-api-access-tbc24\") pod \"machine-config-daemon-4q9qg\" (UID: \"4fea0767-0566-4214-855d-ed0373946271\") " pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.698741 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ba4447c0-bada-49eb-b6b4-b25feff627a9-cni-binary-copy\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.698855 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-var-lib-kubelet\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.698588 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3dd95cd2-5d8c-4e14-bc94-67bb80749037-os-release\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.698930 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3dd95cd2-5d8c-4e14-bc94-67bb80749037-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.698956 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3dd95cd2-5d8c-4e14-bc94-67bb80749037-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.699741 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3dd95cd2-5d8c-4e14-bc94-67bb80749037-cni-sysctl-allowlist\") pod 
\"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.700069 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3dd95cd2-5d8c-4e14-bc94-67bb80749037-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.715392 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]
},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.760013 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.771538 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.771579 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.771595 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.771616 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.771632 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:36Z","lastTransitionTime":"2026-01-23T14:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.784306 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.797180 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.800492 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-hostroot\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.800579 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-hostroot\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.800641 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-multus-cni-dir\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.800735 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-multus-cni-dir\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.800784 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-run-netns\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.800863 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-var-lib-cni-multus\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.800935 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-var-lib-cni-multus\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.800859 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-run-netns\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.800898 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9shl\" (UniqueName: \"kubernetes.io/projected/ba4447c0-bada-49eb-b6b4-b25feff627a9-kube-api-access-v9shl\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.801059 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4fea0767-0566-4214-855d-ed0373946271-rootfs\") pod \"machine-config-daemon-4q9qg\" (UID: \"4fea0767-0566-4214-855d-ed0373946271\") " pod="openshift-machine-config-operator/machine-config-daemon-4q9qg"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.801142 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4fea0767-0566-4214-855d-ed0373946271-rootfs\") pod \"machine-config-daemon-4q9qg\" (UID: \"4fea0767-0566-4214-855d-ed0373946271\") " pod="openshift-machine-config-operator/machine-config-daemon-4q9qg"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.801093 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-system-cni-dir\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.801207 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ba4447c0-bada-49eb-b6b4-b25feff627a9-multus-daemon-config\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.801274 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-system-cni-dir\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.801349 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4fea0767-0566-4214-855d-ed0373946271-proxy-tls\") pod \"machine-config-daemon-4q9qg\" (UID: \"4fea0767-0566-4214-855d-ed0373946271\") " pod="openshift-machine-config-operator/machine-config-daemon-4q9qg"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.801402 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-var-lib-cni-bin\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.801524 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-var-lib-cni-bin\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802030 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ba4447c0-bada-49eb-b6b4-b25feff627a9-multus-daemon-config\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802330 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gddb\" (UniqueName: \"kubernetes.io/projected/3dd95cd2-5d8c-4e14-bc94-67bb80749037-kube-api-access-6gddb\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802358 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbc24\" (UniqueName: \"kubernetes.io/projected/4fea0767-0566-4214-855d-ed0373946271-kube-api-access-tbc24\") pod \"machine-config-daemon-4q9qg\" (UID: \"4fea0767-0566-4214-855d-ed0373946271\") " pod="openshift-machine-config-operator/machine-config-daemon-4q9qg"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802376 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ba4447c0-bada-49eb-b6b4-b25feff627a9-cni-binary-copy\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802394 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-var-lib-kubelet\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802423 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-cnibin\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802439 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-multus-conf-dir\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802455 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4fea0767-0566-4214-855d-ed0373946271-mcd-auth-proxy-config\") pod \"machine-config-daemon-4q9qg\" (UID: \"4fea0767-0566-4214-855d-ed0373946271\") " pod="openshift-machine-config-operator/machine-config-daemon-4q9qg"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802469 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-os-release\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802485 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-etc-kubernetes\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802502 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-multus-socket-dir-parent\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802518 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-run-k8s-cni-cncf-io\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802540 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-run-multus-certs\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802588 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-run-multus-certs\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802768 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-os-release\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802815 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-etc-kubernetes\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802850 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-multus-socket-dir-parent\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802875 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-run-k8s-cni-cncf-io\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802895 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-var-lib-kubelet\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.803368 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ba4447c0-bada-49eb-b6b4-b25feff627a9-cni-binary-copy\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.803380 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4fea0767-0566-4214-855d-ed0373946271-mcd-auth-proxy-config\") pod \"machine-config-daemon-4q9qg\" (UID: \"4fea0767-0566-4214-855d-ed0373946271\") " pod="openshift-machine-config-operator/machine-config-daemon-4q9qg"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.803414 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-cnibin\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.803438 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-multus-conf-dir\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.807524 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4fea0767-0566-4214-855d-ed0373946271-proxy-tls\") pod \"machine-config-daemon-4q9qg\" (UID: \"4fea0767-0566-4214-855d-ed0373946271\") " pod="openshift-machine-config-operator/machine-config-daemon-4q9qg"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.819533 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.825645 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbc24\" (UniqueName: \"kubernetes.io/projected/4fea0767-0566-4214-855d-ed0373946271-kube-api-access-tbc24\") pod \"machine-config-daemon-4q9qg\" (UID: \"4fea0767-0566-4214-855d-ed0373946271\") " pod="openshift-machine-config-operator/machine-config-daemon-4q9qg"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.826301 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gddb\" (UniqueName: \"kubernetes.io/projected/3dd95cd2-5d8c-4e14-bc94-67bb80749037-kube-api-access-6gddb\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.828183 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9shl\" (UniqueName: \"kubernetes.io/projected/ba4447c0-bada-49eb-b6b4-b25feff627a9-kube-api-access-v9shl\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.830939 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-hpxpf"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.842371 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.844369 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z"
Jan 23 14:04:36 crc kubenswrapper[4775]: W0123 14:04:36.853061 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fea0767_0566_4214_855d_ed0373946271.slice/crio-1f5a10de2515f742f1f553243cf07f9610692a56a2cc9d098bc9bd2cbbc29d26 WatchSource:0}: Error finding container 1f5a10de2515f742f1f553243cf07f9610692a56a2cc9d098bc9bd2cbbc29d26: Status 404 returned error can't find the container with id 1f5a10de2515f742f1f553243cf07f9610692a56a2cc9d098bc9bd2cbbc29d26
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.858282 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-8j5kp"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.863533 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.867088 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hpxpf" event={"ID":"ba4447c0-bada-49eb-b6b4-b25feff627a9","Type":"ContainerStarted","Data":"bfd5db624b10f5d55f84c2d097f28815ba13871d7c58be819f1a0199a386f3e8"}
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.873425 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-kv8zk" event={"ID":"c6e25021-b268-4a6c-851d-43eb5504a3d2","Type":"ContainerStarted","Data":"a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f"}
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.873492 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-kv8zk" event={"ID":"c6e25021-b268-4a6c-851d-43eb5504a3d2","Type":"ContainerStarted","Data":"8a869a6f98e205e7ddb8e80b600864259e7faf3ae41f6a70ef78ed7edd879eab"}
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.878254 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.878304 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.878317 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.878531 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.878574 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:36Z","lastTransitionTime":"2026-01-23T14:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.880090 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerStarted","Data":"1f5a10de2515f742f1f553243cf07f9610692a56a2cc9d098bc9bd2cbbc29d26"}
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.887165 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.917395 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.930234 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qrvs8"]
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.931153 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.933977 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.934109 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.934144 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.934233 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.934263 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.934373 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.934433 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.943932 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.959355 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.973919 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.978740 4775 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-23 13:59:35 +0000 UTC, rotation deadline is 2026-10-15 06:36:12.568472625 +0000 UTC
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.978855 4775 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6352h31m35.58962191s for next certificate rotation
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.980852 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.980908 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.980921 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.980940 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.980962 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:36Z","lastTransitionTime":"2026-01-23T14:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.989717 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003476 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-run-ovn-kubernetes\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003508 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-env-overrides\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003529 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-run-netns\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003546 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-var-lib-openvswitch\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003561 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-kubelet\") pod \"ovnkube-node-qrvs8\" 
(UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003578 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-node-log\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003603 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-systemd\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003640 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-openvswitch\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003664 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovn-node-metrics-cert\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003698 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovnkube-config\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003713 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovnkube-script-lib\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003741 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-slash\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003769 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-ovn\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003786 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-log-socket\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003814 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003837 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-cni-bin\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003866 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6jls\" (UniqueName: \"kubernetes.io/projected/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-kube-api-access-d6jls\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003900 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-cni-netd\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003913 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-systemd-units\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003926 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-etc-openvswitch\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.011649 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.028646 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.044096 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.059212 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.074907 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.083466 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.083520 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.083532 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.083553 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.083569 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:37Z","lastTransitionTime":"2026-01-23T14:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.097045 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104290 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-cni-bin\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104326 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6jls\" (UniqueName: \"kubernetes.io/projected/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-kube-api-access-d6jls\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104350 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-cni-netd\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104365 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-systemd-units\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104362 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-cni-bin\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104379 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-etc-openvswitch\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 
14:04:37.104397 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-run-ovn-kubernetes\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104413 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-env-overrides\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104419 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-systemd-units\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104432 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-run-netns\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104441 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-run-ovn-kubernetes\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104453 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-var-lib-openvswitch\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104471 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-etc-openvswitch\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104414 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-cni-netd\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104509 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-kubelet\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104517 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-var-lib-openvswitch\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104499 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-run-netns\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104473 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-kubelet\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104656 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-node-log\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104734 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-systemd\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104791 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-openvswitch\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104863 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovn-node-metrics-cert\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104864 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-node-log\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104897 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovnkube-config\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104924 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-slash\") pod 
\"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104944 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovnkube-script-lib\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104952 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-systemd\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104970 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-ovn\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104994 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-log-socket\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.105020 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.105113 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.105157 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-slash\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104921 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-openvswitch\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.105441 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovnkube-config\") pod \"ovnkube-node-qrvs8\" (UID: 
\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.105491 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-ovn\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.105518 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-log-socket\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.105626 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-env-overrides\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.105944 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovnkube-script-lib\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.109466 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovn-node-metrics-cert\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.112297 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.126859 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6jls\" (UniqueName: \"kubernetes.io/projected/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-kube-api-access-d6jls\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.140774 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.156385 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.169293 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.180315 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.186235 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.186262 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.186270 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.186283 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.186293 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:37Z","lastTransitionTime":"2026-01-23T14:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.193418 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.205883 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.219090 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba
8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.243958 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: W0123 14:04:37.272912 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd5906e8_fa10_4ad1_b8c2_6bf9d00a9c06.slice/crio-c9b1bad48b28a1f69c2c2d6ac40d31127808a59f11181daf49f1fb5d9684dc62 WatchSource:0}: Error finding container c9b1bad48b28a1f69c2c2d6ac40d31127808a59f11181daf49f1fb5d9684dc62: Status 404 returned error can't find the container with id c9b1bad48b28a1f69c2c2d6ac40d31127808a59f11181daf49f1fb5d9684dc62 Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.294052 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.294097 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.294107 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.294123 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.294136 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:37Z","lastTransitionTime":"2026-01-23T14:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.296965 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.321576 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.397469 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.397501 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.397512 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.397527 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.397537 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:37Z","lastTransitionTime":"2026-01-23T14:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.499757 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.499819 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.499828 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.499843 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.499853 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:37Z","lastTransitionTime":"2026-01-23T14:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.602067 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.602110 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.602124 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.602145 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.602159 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:37Z","lastTransitionTime":"2026-01-23T14:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.671951 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 09:47:04.141968643 +0000 UTC Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.706124 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.706823 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.706839 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.706864 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.706880 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:37Z","lastTransitionTime":"2026-01-23T14:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.713634 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.713710 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.713638 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:04:37 crc kubenswrapper[4775]: E0123 14:04:37.713843 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:04:37 crc kubenswrapper[4775]: E0123 14:04:37.713999 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:04:37 crc kubenswrapper[4775]: E0123 14:04:37.714236 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.809307 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.809370 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.809390 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.809416 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.809433 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:37Z","lastTransitionTime":"2026-01-23T14:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.888873 4775 generic.go:334] "Generic (PLEG): container finished" podID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerID="684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40" exitCode=0 Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.888983 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerDied","Data":"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40"} Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.889062 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerStarted","Data":"c9b1bad48b28a1f69c2c2d6ac40d31127808a59f11181daf49f1fb5d9684dc62"} Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.894712 4775 generic.go:334] "Generic (PLEG): container finished" podID="3dd95cd2-5d8c-4e14-bc94-67bb80749037" containerID="7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083" exitCode=0 Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.894827 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" event={"ID":"3dd95cd2-5d8c-4e14-bc94-67bb80749037","Type":"ContainerDied","Data":"7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083"} Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.894861 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" event={"ID":"3dd95cd2-5d8c-4e14-bc94-67bb80749037","Type":"ContainerStarted","Data":"ff86fb0136c263f06815ca9405f4979fa529e7b493f52b56a3db5760f1f5fb00"} Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.897895 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerStarted","Data":"294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529"} Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.897937 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerStarted","Data":"69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d"} Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.900646 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hpxpf" event={"ID":"ba4447c0-bada-49eb-b6b4-b25feff627a9","Type":"ContainerStarted","Data":"d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec"} Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.906293 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.914439 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.914466 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.914475 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.914490 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.914502 4775 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:37Z","lastTransitionTime":"2026-01-23T14:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.918727 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.931256 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.947474 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mount
Path\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.960116 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.972981 4775 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.983411 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.996847 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.010366 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.017133 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.017198 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.017209 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.017230 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.017242 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:38Z","lastTransitionTime":"2026-01-23T14:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.024278 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.035640 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.047530 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba
8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.065419 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.076495 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.089158 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.104040 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.116298 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.119207 4775 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.119233 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.119242 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.119261 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.119274 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:38Z","lastTransitionTime":"2026-01-23T14:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.137243 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z 
is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.157950 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.171645 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.180850 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.192356 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.204058 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.216332 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.223105 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.223140 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.223153 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.223170 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.223183 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:38Z","lastTransitionTime":"2026-01-23T14:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.228147 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.240886 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.263202 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.273036 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.325964 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.326203 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.326308 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.326396 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.326475 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:38Z","lastTransitionTime":"2026-01-23T14:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.428209 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.428439 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.428526 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.428649 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.428743 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:38Z","lastTransitionTime":"2026-01-23T14:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.531642 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.531964 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.531976 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.531991 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.532002 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:38Z","lastTransitionTime":"2026-01-23T14:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.634663 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.634711 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.634728 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.634747 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.634759 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:38Z","lastTransitionTime":"2026-01-23T14:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.672783 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 04:46:27.17446305 +0000 UTC Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.739081 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.739130 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.739144 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.739164 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.739178 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:38Z","lastTransitionTime":"2026-01-23T14:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.805322 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-dwmhf"] Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.805739 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-dwmhf" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.807710 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.807951 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.808173 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.810010 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.819905 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.822654 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5473290b-b658-4193-9287-af63cfc2a1c9-serviceca\") pod \"node-ca-dwmhf\" (UID: \"5473290b-b658-4193-9287-af63cfc2a1c9\") " pod="openshift-image-registry/node-ca-dwmhf" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.822717 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtgsg\" (UniqueName: \"kubernetes.io/projected/5473290b-b658-4193-9287-af63cfc2a1c9-kube-api-access-qtgsg\") pod \"node-ca-dwmhf\" (UID: \"5473290b-b658-4193-9287-af63cfc2a1c9\") " pod="openshift-image-registry/node-ca-dwmhf" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.822751 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5473290b-b658-4193-9287-af63cfc2a1c9-host\") pod \"node-ca-dwmhf\" (UID: \"5473290b-b658-4193-9287-af63cfc2a1c9\") " pod="openshift-image-registry/node-ca-dwmhf" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.834220 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.842308 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.842363 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.842375 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.842393 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.842406 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:38Z","lastTransitionTime":"2026-01-23T14:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.850043 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.870739 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.887137 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.905678 4775 generic.go:334] "Generic (PLEG): container finished" podID="3dd95cd2-5d8c-4e14-bc94-67bb80749037" containerID="5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0" exitCode=0 Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.905782 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" event={"ID":"3dd95cd2-5d8c-4e14-bc94-67bb80749037","Type":"ContainerDied","Data":"5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.910182 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerStarted","Data":"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.910223 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerStarted","Data":"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.910237 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerStarted","Data":"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.910250 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerStarted","Data":"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 
14:04:38.910262 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerStarted","Data":"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.910274 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerStarted","Data":"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.913541 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z 
is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.923517 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5473290b-b658-4193-9287-af63cfc2a1c9-serviceca\") pod \"node-ca-dwmhf\" (UID: \"5473290b-b658-4193-9287-af63cfc2a1c9\") " pod="openshift-image-registry/node-ca-dwmhf" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.923629 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtgsg\" (UniqueName: \"kubernetes.io/projected/5473290b-b658-4193-9287-af63cfc2a1c9-kube-api-access-qtgsg\") pod \"node-ca-dwmhf\" (UID: \"5473290b-b658-4193-9287-af63cfc2a1c9\") " pod="openshift-image-registry/node-ca-dwmhf" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.923752 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5473290b-b658-4193-9287-af63cfc2a1c9-host\") pod \"node-ca-dwmhf\" (UID: \"5473290b-b658-4193-9287-af63cfc2a1c9\") " pod="openshift-image-registry/node-ca-dwmhf" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.923838 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5473290b-b658-4193-9287-af63cfc2a1c9-host\") pod \"node-ca-dwmhf\" (UID: \"5473290b-b658-4193-9287-af63cfc2a1c9\") " pod="openshift-image-registry/node-ca-dwmhf" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.924581 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5473290b-b658-4193-9287-af63cfc2a1c9-serviceca\") pod \"node-ca-dwmhf\" (UID: \"5473290b-b658-4193-9287-af63cfc2a1c9\") " pod="openshift-image-registry/node-ca-dwmhf" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.932278 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.944876 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.944932 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.944951 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.944978 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.944996 4775 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:38Z","lastTransitionTime":"2026-01-23T14:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.946382 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.953074 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtgsg\" (UniqueName: \"kubernetes.io/projected/5473290b-b658-4193-9287-af63cfc2a1c9-kube-api-access-qtgsg\") pod \"node-ca-dwmhf\" (UID: \"5473290b-b658-4193-9287-af63cfc2a1c9\") " pod="openshift-image-registry/node-ca-dwmhf" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.961305 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.974993 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.986626 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.000325 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.018363 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc 
kubenswrapper[4775]: I0123 14:04:39.031100 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.041565 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.047747 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.047785 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.047795 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.047823 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.047832 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:39Z","lastTransitionTime":"2026-01-23T14:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.056292 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.067091 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.079622 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba
8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.102230 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.114417 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.122953 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.128914 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-dwmhf" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.134782 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.147043 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.150934 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.150960 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.150970 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.150989 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.151000 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:39Z","lastTransitionTime":"2026-01-23T14:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:39 crc kubenswrapper[4775]: W0123 14:04:39.155494 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5473290b_b658_4193_9287_af63cfc2a1c9.slice/crio-55dd733e6d8c138e872289f4dcefedfaf7b5ac2253edf1a530f086da69216502 WatchSource:0}: Error finding container 55dd733e6d8c138e872289f4dcefedfaf7b5ac2253edf1a530f086da69216502: Status 404 returned error can't find the container with id 55dd733e6d8c138e872289f4dcefedfaf7b5ac2253edf1a530f086da69216502 Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.161181 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.180252 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z 
is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.192199 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.208200 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.232265 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin
\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.251793 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.255581 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.255813 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.255888 4775 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.255957 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.256019 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:39Z","lastTransitionTime":"2026-01-23T14:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.282097 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.358697 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.358738 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.358752 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.358769 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.358780 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:39Z","lastTransitionTime":"2026-01-23T14:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.427594 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:04:39 crc kubenswrapper[4775]: E0123 14:04:39.427692 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:04:47.427671752 +0000 UTC m=+34.422500502 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.427737 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.427781 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.427825 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.427860 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:04:39 crc kubenswrapper[4775]: E0123 14:04:39.427918 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 23 14:04:39 crc kubenswrapper[4775]: E0123 14:04:39.427936 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 23 14:04:39 crc kubenswrapper[4775]: E0123 14:04:39.427935 4775 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 23 14:04:39 crc kubenswrapper[4775]: E0123 14:04:39.427948 4775 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 14:04:39 crc kubenswrapper[4775]: E0123 14:04:39.427988 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:47.42798118 +0000 UTC m=+34.422809920 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 14:04:39 crc kubenswrapper[4775]: E0123 14:04:39.428002 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:47.42799672 +0000 UTC m=+34.422825460 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 23 14:04:39 crc kubenswrapper[4775]: E0123 14:04:39.428013 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 23 14:04:39 crc kubenswrapper[4775]: E0123 14:04:39.428035 4775 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 23 14:04:39 crc kubenswrapper[4775]: E0123 14:04:39.428133 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:47.428112223 +0000 UTC m=+34.422940973 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 23 14:04:39 crc kubenswrapper[4775]: E0123 14:04:39.428051 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 23 14:04:39 crc kubenswrapper[4775]: E0123 14:04:39.428170 4775 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 14:04:39 crc kubenswrapper[4775]: E0123 14:04:39.428212 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:47.428204066 +0000 UTC m=+34.423032826 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.461298 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.461342 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.461350 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.461364 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.461373 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:39Z","lastTransitionTime":"2026-01-23T14:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.563341 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.563382 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.563391 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.563405 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.563414 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:39Z","lastTransitionTime":"2026-01-23T14:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.665975 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.666033 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.666054 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.666078 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.666096 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:39Z","lastTransitionTime":"2026-01-23T14:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.673746 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 03:10:46.819302227 +0000 UTC
Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.713354 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.713401 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.713456 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 14:04:39 crc kubenswrapper[4775]: E0123 14:04:39.713515 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 14:04:39 crc kubenswrapper[4775]: E0123 14:04:39.713617 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 14:04:39 crc kubenswrapper[4775]: E0123 14:04:39.713707 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.769005 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.769060 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.769076 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.769101 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.769119 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:39Z","lastTransitionTime":"2026-01-23T14:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.873462 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.873682 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.873764 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.873898 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.874141 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:39Z","lastTransitionTime":"2026-01-23T14:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.916786 4775 generic.go:334] "Generic (PLEG): container finished" podID="3dd95cd2-5d8c-4e14-bc94-67bb80749037" containerID="c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa" exitCode=0 Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.916859 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" event={"ID":"3dd95cd2-5d8c-4e14-bc94-67bb80749037","Type":"ContainerDied","Data":"c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa"} Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.919485 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-dwmhf" event={"ID":"5473290b-b658-4193-9287-af63cfc2a1c9","Type":"ContainerStarted","Data":"5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b"} Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.919560 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-dwmhf" event={"ID":"5473290b-b658-4193-9287-af63cfc2a1c9","Type":"ContainerStarted","Data":"55dd733e6d8c138e872289f4dcefedfaf7b5ac2253edf1a530f086da69216502"} Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.948339 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is 
after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.965054 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.977142 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.977197 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.977217 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.977244 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.977265 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:39Z","lastTransitionTime":"2026-01-23T14:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.985083 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.017892 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.030442 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.048106 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.063330 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.079437 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.079493 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.079512 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.079537 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.079556 4775 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:40Z","lastTransitionTime":"2026-01-23T14:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.082774 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.100116 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.118662 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z 
is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.137968 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.153323 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.174998 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.181479 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.181550 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.181563 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.181580 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.181591 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:40Z","lastTransitionTime":"2026-01-23T14:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.186390 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.202066 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.215103 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.226263 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.242870 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z 
is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.254541 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.267697 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.280651 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.283853 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.283881 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.283892 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.283907 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.283917 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:40Z","lastTransitionTime":"2026-01-23T14:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.289891 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.302358 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.315567 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\
":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary
-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.326586 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.343976 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.365796 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.378910 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.385983 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.386018 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.386027 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.386045 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.386056 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:40Z","lastTransitionTime":"2026-01-23T14:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.390558 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.406275 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.488421 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.488502 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.488520 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.488548 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.488566 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:40Z","lastTransitionTime":"2026-01-23T14:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.592327 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.592446 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.592479 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.592518 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.592545 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:40Z","lastTransitionTime":"2026-01-23T14:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.674507 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 16:47:32.171102222 +0000 UTC Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.696560 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.696628 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.696645 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.696671 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.696693 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:40Z","lastTransitionTime":"2026-01-23T14:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.800155 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.800228 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.800258 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.800290 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.800315 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:40Z","lastTransitionTime":"2026-01-23T14:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.945469 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.945528 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.945546 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.945574 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.945593 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:40Z","lastTransitionTime":"2026-01-23T14:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.952243 4775 generic.go:334] "Generic (PLEG): container finished" podID="3dd95cd2-5d8c-4e14-bc94-67bb80749037" containerID="41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a" exitCode=0 Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.952336 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" event={"ID":"3dd95cd2-5d8c-4e14-bc94-67bb80749037","Type":"ContainerDied","Data":"41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a"} Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.976933 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tb
c24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.008735 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:41Z 
is after 2025-08-24T17:21:41Z" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.032785 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:41Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.049678 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.049728 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.049748 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.049770 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.049787 4775 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:41Z","lastTransitionTime":"2026-01-23T14:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.052596 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:41Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.069224 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:41Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.087505 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:41Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.098322 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-23T14:04:41Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.117543 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:41Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.136274 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:41Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.154031 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-23T14:04:41Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.157651 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.157684 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.157694 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.157708 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.157718 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:41Z","lastTransitionTime":"2026-01-23T14:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.168702 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:41Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.184702 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:41Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.197141 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:41Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.208788 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:41Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.236926 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:41Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.260238 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.260288 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.260299 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.260320 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.260333 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:41Z","lastTransitionTime":"2026-01-23T14:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.362699 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.362733 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.362744 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.362758 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.362767 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:41Z","lastTransitionTime":"2026-01-23T14:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.465712 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.465773 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.465795 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.466030 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.466049 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:41Z","lastTransitionTime":"2026-01-23T14:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.569014 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.569061 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.569071 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.569090 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.569103 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:41Z","lastTransitionTime":"2026-01-23T14:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.671554 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.671922 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.671931 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.671946 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.671958 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:41Z","lastTransitionTime":"2026-01-23T14:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.674773 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 21:17:25.205481822 +0000 UTC Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.713065 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.713119 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.713158 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:04:41 crc kubenswrapper[4775]: E0123 14:04:41.713272 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:04:41 crc kubenswrapper[4775]: E0123 14:04:41.713379 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:04:41 crc kubenswrapper[4775]: E0123 14:04:41.713581 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.777602 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.777630 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.777639 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.777654 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.777663 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:41Z","lastTransitionTime":"2026-01-23T14:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.884428 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.884482 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.884499 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.884525 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.884543 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:41Z","lastTransitionTime":"2026-01-23T14:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.960846 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerStarted","Data":"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c"} Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.964628 4775 generic.go:334] "Generic (PLEG): container finished" podID="3dd95cd2-5d8c-4e14-bc94-67bb80749037" containerID="ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf" exitCode=0 Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.964677 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" event={"ID":"3dd95cd2-5d8c-4e14-bc94-67bb80749037","Type":"ContainerDied","Data":"ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf"} Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.983632 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:41Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.987704 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.987773 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.987794 4775 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.987852 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.987881 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:41Z","lastTransitionTime":"2026-01-23T14:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.015979 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:42Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.033115 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-23T14:04:42Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.055393 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:42Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.077816 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:42Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.092961 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.093010 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.093026 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.093048 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.093064 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:42Z","lastTransitionTime":"2026-01-23T14:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.108252 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:42Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.122211 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:42Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.140116 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:42Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.156424 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:42Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.169505 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:42Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.185075 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:42Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.195965 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.195999 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.196009 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.196023 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.196036 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:42Z","lastTransitionTime":"2026-01-23T14:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.204423 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:42Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.216020 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:42Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.234047 4775 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:42Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.246660 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:42Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.298276 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.298567 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.298647 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.298730 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.298825 4775 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:42Z","lastTransitionTime":"2026-01-23T14:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.402527 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.402602 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.402620 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.402644 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.402661 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:42Z","lastTransitionTime":"2026-01-23T14:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.506283 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.506368 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.506386 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.506434 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.506452 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:42Z","lastTransitionTime":"2026-01-23T14:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.609661 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.609717 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.609734 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.609761 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.609778 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:42Z","lastTransitionTime":"2026-01-23T14:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.675874 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 01:39:32.504875224 +0000 UTC Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.712347 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.712431 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.712445 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.712463 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.712476 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:42Z","lastTransitionTime":"2026-01-23T14:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.815116 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.815163 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.815183 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.815203 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.815214 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:42Z","lastTransitionTime":"2026-01-23T14:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.918920 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.918970 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.918982 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.919001 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.919016 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:42Z","lastTransitionTime":"2026-01-23T14:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.972116 4775 generic.go:334] "Generic (PLEG): container finished" podID="3dd95cd2-5d8c-4e14-bc94-67bb80749037" containerID="cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988" exitCode=0 Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.972173 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" event={"ID":"3dd95cd2-5d8c-4e14-bc94-67bb80749037","Type":"ContainerDied","Data":"cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988"} Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.988872 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:42Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.002947 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.015568 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.022095 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.022143 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.022154 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.022174 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.022185 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:43Z","lastTransitionTime":"2026-01-23T14:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.028903 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":
\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.081443 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.104298 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.122518 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.125004 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.125184 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.125324 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.125454 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.125580 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:43Z","lastTransitionTime":"2026-01-23T14:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.133431 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.152888 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.165048 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.175378 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.187737 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.198683 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.208437 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.226748 4775 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z"
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.227364 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.227404 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.227413 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.227428 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.227437 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:43Z","lastTransitionTime":"2026-01-23T14:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.330238 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.330698 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.330710 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.330732 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.330747 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:43Z","lastTransitionTime":"2026-01-23T14:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.433946 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.433989 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.434001 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.434019 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.434031 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:43Z","lastTransitionTime":"2026-01-23T14:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.521969 4775 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.541905 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.541936 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.541947 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.541963 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.541975 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:43Z","lastTransitionTime":"2026-01-23T14:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.643957 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.643989 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.644011 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.644030 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.644042 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:43Z","lastTransitionTime":"2026-01-23T14:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.676959 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 08:22:46.144080737 +0000 UTC
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.713508 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.713606 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.713529 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 14:04:43 crc kubenswrapper[4775]: E0123 14:04:43.713698 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 14:04:43 crc kubenswrapper[4775]: E0123 14:04:43.713831 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 14:04:43 crc kubenswrapper[4775]: E0123 14:04:43.713926 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.732909 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z"
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.747176 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.747251 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.747268 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.747288 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.747304 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:43Z","lastTransitionTime":"2026-01-23T14:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.751632 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.768230 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.794152 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.814411 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.831307 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.853127 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.854061 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.854109 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.854122 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.854140 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.854155 4775 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:43Z","lastTransitionTime":"2026-01-23T14:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.871680 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.885831 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.907042 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z 
is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.919603 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.937172 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.957257 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.957360 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.957374 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.957406 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.957419 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:43Z","lastTransitionTime":"2026-01-23T14:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.959299 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.973581 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.980380 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerStarted","Data":"ee53a36fd4e619c3304bba625f006b04da1798421f233f341251c9fd5a17cf9e"} Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.980769 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.980848 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.980862 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.984963 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" event={"ID":"3dd95cd2-5d8c-4e14-bc94-67bb80749037","Type":"ContainerStarted","Data":"2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060"} Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.990698 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.005608 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.011570 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.013142 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.026183 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.038142 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.058996 4775 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee53a36fd4e619c3304bba625f006b04da1798421f233f341251c9fd5a17cf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuberne
tes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.059940 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.059980 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.059992 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 
14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.060009 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.060021 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:44Z","lastTransitionTime":"2026-01-23T14:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.069606 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.086940 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.104617 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.120739 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.135710 4775 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.155061 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.163146 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.163193 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.163202 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.163219 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.163231 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:44Z","lastTransitionTime":"2026-01-23T14:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.171055 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.183678 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.205503 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.218943 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.229822 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.246364 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.258324 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.265224 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.265257 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.265266 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.265279 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.265289 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:44Z","lastTransitionTime":"2026-01-23T14:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.270195 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.284537 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.305333 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.319031 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.336134 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.355550 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.367945 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.367996 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.368008 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.368031 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.368044 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:44Z","lastTransitionTime":"2026-01-23T14:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.370663 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.383968 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.399732 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.411225 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.424086 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.443077 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee53a36fd4e619c3304bba625f006b04da179842
1f233f341251c9fd5a17cf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.457830 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.470628 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.470678 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.470696 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.470719 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.470735 4775 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:44Z","lastTransitionTime":"2026-01-23T14:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.573784 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.573854 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.573867 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.573885 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.573895 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:44Z","lastTransitionTime":"2026-01-23T14:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.677109 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 05:48:02.468263613 +0000 UTC Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.677780 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.677884 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.677906 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.677933 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.677950 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:44Z","lastTransitionTime":"2026-01-23T14:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.779991 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.780067 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.780092 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.780122 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.780147 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:44Z","lastTransitionTime":"2026-01-23T14:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.882749 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.882907 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.882982 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.883018 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.883093 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:44Z","lastTransitionTime":"2026-01-23T14:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.987137 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.987206 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.987224 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.987249 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.987267 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:44Z","lastTransitionTime":"2026-01-23T14:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.988784 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.988866 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.988889 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.988917 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.988939 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:44Z","lastTransitionTime":"2026-01-23T14:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:45 crc kubenswrapper[4775]: E0123 14:04:45.007915 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:45Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.012637 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.012754 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.012776 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.012830 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.012859 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:45Z","lastTransitionTime":"2026-01-23T14:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:45 crc kubenswrapper[4775]: E0123 14:04:45.035376 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:45Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.039563 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.039616 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.039630 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.039650 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.039662 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:45Z","lastTransitionTime":"2026-01-23T14:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:45 crc kubenswrapper[4775]: E0123 14:04:45.054040 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:45Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.058628 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.058669 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.058679 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.058698 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.058710 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:45Z","lastTransitionTime":"2026-01-23T14:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:45 crc kubenswrapper[4775]: E0123 14:04:45.075296 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:45Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.080373 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.080423 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.080436 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.080455 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.080469 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:45Z","lastTransitionTime":"2026-01-23T14:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:45 crc kubenswrapper[4775]: E0123 14:04:45.095966 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:45Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:45 crc kubenswrapper[4775]: E0123 14:04:45.096138 4775 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.099763 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.099833 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.099847 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.099862 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.099875 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:45Z","lastTransitionTime":"2026-01-23T14:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.202173 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.202216 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.202224 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.202240 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.202250 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:45Z","lastTransitionTime":"2026-01-23T14:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.304664 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.304703 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.304713 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.304731 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.304742 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:45Z","lastTransitionTime":"2026-01-23T14:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.406816 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.406856 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.406866 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.406881 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.406892 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:45Z","lastTransitionTime":"2026-01-23T14:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.508722 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.508763 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.508776 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.508793 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.508821 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:45Z","lastTransitionTime":"2026-01-23T14:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.611721 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.611769 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.611777 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.611794 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.611823 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:45Z","lastTransitionTime":"2026-01-23T14:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.677576 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 16:46:47.758020107 +0000 UTC
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.713162 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.713214 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 14:04:45 crc kubenswrapper[4775]: E0123 14:04:45.713303 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.713341 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 14:04:45 crc kubenswrapper[4775]: E0123 14:04:45.713524 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 14:04:45 crc kubenswrapper[4775]: E0123 14:04:45.713643 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.714723 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.714776 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.714794 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.714861 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.714889 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:45Z","lastTransitionTime":"2026-01-23T14:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
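Every KubeletNotReady heartbeat above carries the runtime's complaint that /etc/kubernetes/cni/net.d/ holds no CNI configuration. A rough Go sketch of that directory check follows; it is an illustration of the idea, not the actual CRI-O/ocicni code, and it simply looks for the file extensions CNI runtimes conventionally load.

// cnicheck.go - a rough sketch of the check behind "no CNI configuration
// file in /etc/kubernetes/cni/net.d/" (illustrative, not CRI-O code).
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log message
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var found []string
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			found = append(found, e.Name())
		}
	}
	if len(found) == 0 {
		// With no config present the runtime reports NetworkReady=false,
		// which the kubelet surfaces as the KubeletNotReady condition above.
		fmt.Println("no CNI configuration file found; network not ready")
		return
	}
	fmt.Println("CNI configs:", found)
}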
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.817921 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.817969 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.817982 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.818002 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.818016 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:45Z","lastTransitionTime":"2026-01-23T14:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.920388 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.920430 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.920442 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.920458 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.920468 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:45Z","lastTransitionTime":"2026-01-23T14:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.022984 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.023368 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.023383 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.023400 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.023413 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:46Z","lastTransitionTime":"2026-01-23T14:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.126161 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.126222 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.126232 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.126247 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.126277 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:46Z","lastTransitionTime":"2026-01-23T14:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.229173 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.229269 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.229287 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.229315 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.229332 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:46Z","lastTransitionTime":"2026-01-23T14:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.332597 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.332651 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.332669 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.332692 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.332709 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:46Z","lastTransitionTime":"2026-01-23T14:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
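The setters.go:603 entries embed the node's Ready condition as a JSON object. A small Go sketch that unmarshals one of those payloads into a struct mirroring the Kubernetes NodeCondition shape (field set trimmed to what the log shows; this is illustrative, not kubelet code):

// condition.go - decode the condition={...} payload from the entries above.
package main

import (
	"encoding/json"
	"fmt"
)

// NodeCondition mirrors the fields visible in the logged condition object.
type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Payload copied from one of the setters.go:603 entries above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:46Z","lastTransitionTime":"2026-01-23T14:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c NodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("node %s=%s reason=%s\n", c.Type, c.Status, c.Reason)
}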
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.440317 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.440385 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.440403 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.440428 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.440444 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:46Z","lastTransitionTime":"2026-01-23T14:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.542930 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.542998 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.543022 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.543053 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.543075 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:46Z","lastTransitionTime":"2026-01-23T14:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.646241 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.646287 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.646303 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.646325 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.646339 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:46Z","lastTransitionTime":"2026-01-23T14:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.678707 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 15:06:27.931427106 +0000 UTC
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.748667 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.748720 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.748735 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.748756 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.748770 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:46Z","lastTransitionTime":"2026-01-23T14:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.852124 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.852231 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.852266 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.852300 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.852323 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:46Z","lastTransitionTime":"2026-01-23T14:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
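The two certificate_manager.go:356 lines (14:04:45.677576 earlier and 14:04:46.678707 here) report different rotation deadlines for the same 2026-02-24 expiry because client-go's certificate manager picks a randomized deadline inside the certificate's validity window. A sketch of that jittered computation follows; the 70-90% fractions and the issuance time are assumptions for illustration, not the library's exact values.

// rotation.go - a sketch of deriving a jittered rotation deadline from a
// certificate's validity window (fractions and issuance time assumed).
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random instant late in the certificate's
// lifetime: between 70% and 90% of the total validity duration after
// notBefore. Each call yields a different deadline, matching the two
// differing deadlines logged above.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	lo := time.Duration(float64(total) * 0.7)
	hi := time.Duration(float64(total) * 0.9)
	jittered := lo + time.Duration(rand.Int63n(int64(hi-lo)))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // expiry from the log
	notBefore := notAfter.Add(-365 * 24 * time.Hour)                // issuance time assumed
	for i := 0; i < 2; i++ {
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter).UTC())
	}
}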
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.955179 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.955235 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.955253 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.955278 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.955298 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:46Z","lastTransitionTime":"2026-01-23T14:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.997232 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovnkube-controller/0.log"
Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.001419 4775 generic.go:334] "Generic (PLEG): container finished" podID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerID="ee53a36fd4e619c3304bba625f006b04da1798421f233f341251c9fd5a17cf9e" exitCode=1
Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.001481 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerDied","Data":"ee53a36fd4e619c3304bba625f006b04da1798421f233f341251c9fd5a17cf9e"}
Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.002633 4775 scope.go:117] "RemoveContainer" containerID="ee53a36fd4e619c3304bba625f006b04da1798421f233f341251c9fd5a17cf9e"
Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.021300 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.052116 4775 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee53a36fd4e619c3304bba625f006b04da1798421f233f341251c9fd5a17cf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee53a36fd4e619c3304bba625f006b04da1798421f233f341251c9fd5a17cf9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:04:46Z\\\",\\\"message\\\":\\\" 6102 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 14:04:45.926481 6102 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 14:04:45.926527 6102 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 14:04:45.926550 6102 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 14:04:45.926593 6102 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 14:04:45.926635 6102 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 14:04:45.926609 6102 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 14:04:45.926655 6102 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 14:04:45.926678 6102 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 14:04:45.926719 6102 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 14:04:45.926733 6102 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 14:04:45.926720 6102 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 14:04:45.926761 6102 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 14:04:45.926770 6102 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 14:04:45.926799 6102 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 14:04:45.926769 6102 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.059632 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.059694 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.059721 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.059752 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.059775 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:47Z","lastTransitionTime":"2026-01-23T14:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.074435 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.091019 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.105696 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.125381 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.146609 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.164077 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.164136 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.164148 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.164173 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.164188 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:47Z","lastTransitionTime":"2026-01-23T14:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.170327 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.194265 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.211404 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.229409 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.244480 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.258330 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.267672 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.267726 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.267749 4775 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.267777 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.267859 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:47Z","lastTransitionTime":"2026-01-23T14:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.275570 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce
37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.296821 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.370480 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.370545 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.370564 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.370591 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.370611 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:47Z","lastTransitionTime":"2026-01-23T14:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.452922 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.474028 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.474090 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.474108 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.474131 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.474146 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:47Z","lastTransitionTime":"2026-01-23T14:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.477006 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.499834 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multu
s/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.515000 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.515186 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:04:47 crc kubenswrapper[4775]: E0123 14:04:47.515226 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:05:03.515194093 +0000 UTC m=+50.510022843 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.515261 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.515301 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.515338 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:47 crc kubenswrapper[4775]: E0123 14:04:47.515412 4775 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 14:04:47 crc kubenswrapper[4775]: E0123 14:04:47.515415 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 14:04:47 crc kubenswrapper[4775]: E0123 14:04:47.515447 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 14:04:47 crc kubenswrapper[4775]: E0123 14:04:47.515460 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 14:05:03.51545118 +0000 UTC m=+50.510279930 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 14:04:47 crc kubenswrapper[4775]: E0123 14:04:47.515471 4775 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:47 crc kubenswrapper[4775]: E0123 14:04:47.515537 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 14:05:03.515516102 +0000 UTC m=+50.510344882 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:47 crc kubenswrapper[4775]: E0123 14:04:47.515646 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 14:04:47 crc kubenswrapper[4775]: E0123 14:04:47.515744 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 14:04:47 crc kubenswrapper[4775]: E0123 14:04:47.515775 4775 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:47 crc kubenswrapper[4775]: E0123 14:04:47.515699 4775 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 14:04:47 crc kubenswrapper[4775]: E0123 14:04:47.515934 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 14:05:03.515900252 +0000 UTC m=+50.510729032 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:47 crc kubenswrapper[4775]: E0123 14:04:47.516059 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 14:05:03.515973394 +0000 UTC m=+50.510802164 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.524556 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Com
pleted\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.543846 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",
\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.563620 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.576981 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.577037 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.577053 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.577081 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.577102 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:47Z","lastTransitionTime":"2026-01-23T14:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.583921 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.607437 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.625793 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba
8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.652992 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.668924 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.679349 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 22:43:34.23487583 +0000 UTC Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.679941 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.679980 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.679990 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.680013 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.680025 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:47Z","lastTransitionTime":"2026-01-23T14:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.680400 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.693410 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.706095 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.713546 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.713574 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.713618 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:04:47 crc kubenswrapper[4775]: E0123 14:04:47.713688 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:04:47 crc kubenswrapper[4775]: E0123 14:04:47.713839 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:04:47 crc kubenswrapper[4775]: E0123 14:04:47.713977 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.717180 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.740137 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee53a36fd4e619c3304bba625f006b04da179842
1f233f341251c9fd5a17cf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee53a36fd4e619c3304bba625f006b04da1798421f233f341251c9fd5a17cf9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:04:46Z\\\",\\\"message\\\":\\\" 6102 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 14:04:45.926481 6102 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 14:04:45.926527 6102 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 14:04:45.926550 6102 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 14:04:45.926593 6102 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 14:04:45.926635 6102 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 14:04:45.926609 6102 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 14:04:45.926655 6102 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 14:04:45.926678 6102 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 14:04:45.926719 6102 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 14:04:45.926733 6102 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 14:04:45.926720 6102 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 14:04:45.926761 6102 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 14:04:45.926770 6102 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 14:04:45.926799 6102 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 14:04:45.926769 6102 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.784126 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.784181 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.784195 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.784214 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.784230 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:47Z","lastTransitionTime":"2026-01-23T14:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.887138 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.887191 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.887211 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.887237 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.887259 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:47Z","lastTransitionTime":"2026-01-23T14:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.990650 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.990718 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.990739 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.990770 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.990792 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:47Z","lastTransitionTime":"2026-01-23T14:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.006032 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovnkube-controller/0.log" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.008628 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerStarted","Data":"e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5"} Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.092878 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.092918 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.092931 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.092948 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.092958 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:48Z","lastTransitionTime":"2026-01-23T14:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.195876 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.195933 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.195945 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.195961 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.195971 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:48Z","lastTransitionTime":"2026-01-23T14:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.299311 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.299386 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.299413 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.299445 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.299469 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:48Z","lastTransitionTime":"2026-01-23T14:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.402049 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.402112 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.402130 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.402158 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.402177 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:48Z","lastTransitionTime":"2026-01-23T14:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.505541 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.505607 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.505620 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.505642 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.505655 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:48Z","lastTransitionTime":"2026-01-23T14:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.608153 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.608201 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.608213 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.608231 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.608242 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:48Z","lastTransitionTime":"2026-01-23T14:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.687230 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 11:28:18.571652395 +0000 UTC Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.710347 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.710409 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.710425 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.710450 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.710468 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:48Z","lastTransitionTime":"2026-01-23T14:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.812846 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.812907 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.812924 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.812949 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.812967 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:48Z","lastTransitionTime":"2026-01-23T14:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.915303 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.915347 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.915358 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.915373 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.915384 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:48Z","lastTransitionTime":"2026-01-23T14:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.011988 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.017215 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.017264 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.017279 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.017297 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.017312 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:49Z","lastTransitionTime":"2026-01-23T14:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.030140 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.086005 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.102603 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.120242 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.120288 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.120300 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.120320 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.120333 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:49Z","lastTransitionTime":"2026-01-23T14:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.128428 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":
\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.143284 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.155451 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.170329 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.183390 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.195304 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.212229 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.222454 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.222486 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.222499 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.222515 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.222527 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:49Z","lastTransitionTime":"2026-01-23T14:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.241650 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.253051 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.273957 4775 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee53a36fd4e619c3304bba625f006b04da1798421f233f341251c9fd5a17cf9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:04:46Z\\\",\\\"message\\\":\\\" 6102 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 14:04:45.926481 6102 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 14:04:45.926527 6102 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 14:04:45.926550 6102 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 14:04:45.926593 6102 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 14:04:45.926635 6102 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 14:04:45.926609 6102 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 14:04:45.926655 6102 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 14:04:45.926678 6102 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 14:04:45.926719 6102 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 14:04:45.926733 6102 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 14:04:45.926720 6102 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 14:04:45.926761 6102 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 14:04:45.926770 6102 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 14:04:45.926799 6102 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 14:04:45.926769 6102 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.290115 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\n\",\"reason\":\"Error\",\"startedAt\":\"2026-01-23T14:04:15Z\"}},\"name\":\"kube-apiserver-check-endpoints\",\"ready\":true,\"restartCount\":1,\"started\":true,\"state\":{\"running\":{\"startedAt\":\"2026-01-23T14:04:32Z\"}},\"volumeMounts\":[{\"mountPath\":\"/etc/kubernetes/static-pod-resources\",\"name\":\"resource-dir\"},{\"mountPath\":\"/etc/kubernetes/static-pod-certs\",\"name\":\"cert-dir\"}]},{\"containerID\":\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\",\"image\":\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\",\"imageID\":\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\",\"lastState\":{},\"name\":\"kube-apiserver-insecure-readyz\",\"ready\":true,\"restartCount\":0,\"started\":true,\"state\":{\"running\":{\"startedAt\":\"2026-01-23T14:04:15Z\"}}}],\"hostIP\":\"192.168.126.11\",\"hostIPs\":[{\"ip\":\"192.168.126.11\"}],\"initContainerStatuses\":[{\"containerID\":\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\",\"image\":\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\",\"imageID\":\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\",\"lastState\":{},\"name\":\"setup\",\"ready\":true,\"restartCount\":0,\"started\":false,\"state\":{\"terminated\":{\"containerID\":\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\",\"exitCode\":0,\"finishedAt\":\"2026-01-23T14:04:14Z\",\"reason\":\"Completed\",\"startedAt\":\"2026-01-23T14:04:14Z\"}},\"volumeMounts\":[{\"mountPath\":\"/var/log/kube-apiserver\",\"name\":\"audit-dir\"}]}],\"phase\":\"Running\",\"podIP\":\"192.168.126.11\",\"podIPs\":[{\"ip\":\"192.168.126.11\"}],\"startTime\":\"2026-01-23T14:04:13Z\"}}" for pod "openshift-kube-apiserver"/"kube-apiserver-crc": Internal error occurred: failed calling webhook "pod.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/pod?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z"
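[Editor's note] Every patch failure in this log traces back to one condition: the webhook's serving certificate expired on 2025-08-24 while the node clock reads 2026-01-23. Below is a minimal, stdlib-only Go sketch of the same validity check the TLS handshake performs; the certificate path is an assumption based on the /etc/webhook-cert/ mount seen later in this log, not a confirmed location.

```go
// Sketch only: reproduce the "x509: certificate has expired or is not yet
// valid" check from the log against an assumed PEM file on disk.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/webhook-cert/tls.crt") // hypothetical path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in file")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	switch {
	case now.After(cert.NotAfter):
		// This is the branch the kubelet keeps hitting in the records above.
		fmt.Printf("expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Println("not yet valid")
	default:
		fmt.Println("certificate is within its validity window")
	}
}
```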
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.305069 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.325166 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.325223 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.325242 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.325267 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.325284 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:49Z","lastTransitionTime":"2026-01-23T14:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.428655 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.428716 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.428732 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.428757 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.428777 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:49Z","lastTransitionTime":"2026-01-23T14:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.532923 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.533018 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.533043 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.533077 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.533107 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:49Z","lastTransitionTime":"2026-01-23T14:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.636688 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.636750 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.636767 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.636847 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.636866 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:49Z","lastTransitionTime":"2026-01-23T14:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.687922 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 04:46:22.576718274 +0000 UTC
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.713576 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.713645 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.713716 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 14:04:49 crc kubenswrapper[4775]: E0123 14:04:49.713913 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 14:04:49 crc kubenswrapper[4775]: E0123 14:04:49.714014 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 14:04:49 crc kubenswrapper[4775]: E0123 14:04:49.714132 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.739143 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.739175 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.739183 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.739197 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.739206 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:49Z","lastTransitionTime":"2026-01-23T14:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
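[Editor's note] Every NodeNotReady record above carries the same message: the kubelet will not report NetworkReady until a CNI network configuration appears in /etc/kubernetes/cni/net.d/ (here, the file ovnkube-controller writes once it recovers). A stdlib-only Go sketch of that directory probe; the accepted extensions follow the libcni convention and should be treated as an assumption, not this kubelet's exact logic.

```go
// Sketch: check a CNI conf directory the way the "network not ready"
// message implies -- network is ready iff a config file exists there.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	const confDir = "/etc/kubernetes/cni/net.d" // directory named in the log
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("network not ready:", err)
		return
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions libcni conventionally accepts
			fmt.Println("found CNI config:", e.Name())
			return
		}
	}
	fmt.Println("network not ready: no CNI configuration file in", confDir)
}
```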
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.842888 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.842942 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.842963 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.842993 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.843015 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:49Z","lastTransitionTime":"2026-01-23T14:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.869543 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw"]
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.870151 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.873159 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.873614 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.886542 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.919998 4775 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee53a36fd4e619c3304bba625f006b04da1798421f233f341251c9fd5a17cf9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:04:46Z\\\",\\\"message\\\":\\\" 6102 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 14:04:45.926481 6102 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 14:04:45.926527 6102 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 14:04:45.926550 6102 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 14:04:45.926593 6102 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 14:04:45.926635 6102 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 14:04:45.926609 6102 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 14:04:45.926655 6102 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 14:04:45.926678 6102 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 14:04:45.926719 6102 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 14:04:45.926733 6102 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 14:04:45.926720 6102 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 14:04:45.926761 6102 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 14:04:45.926770 6102 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 14:04:45.926799 6102 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 14:04:45.926769 6102 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.937436 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\n\",\"reason\":\"Error\",\"startedAt\":\"2026-01-23T14:04:15Z\"}},\"name\":\"kube-apiserver-check-endpoints\",\"ready\":true,\"restartCount\":1,\"started\":true,\"state\":{\"running\":{\"startedAt\":\"2026-01-23T14:04:32Z\"}},\"volumeMounts\":[{\"mountPath\":\"/etc/kubernetes/static-pod-resources\",\"name\":\"resource-dir\"},{\"mountPath\":\"/etc/kubernetes/static-pod-certs\",\"name\":\"cert-dir\"}]},{\"containerID\":\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\",\"image\":\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\",\"imageID\":\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\",\"lastState\":{},\"name\":\"kube-apiserver-insecure-readyz\",\"ready\":true,\"restartCount\":0,\"started\":true,\"state\":{\"running\":{\"startedAt\":\"2026-01-23T14:04:15Z\"}}}],\"hostIP\":\"192.168.126.11\",\"hostIPs\":[{\"ip\":\"192.168.126.11\"}],\"initContainerStatuses\":[{\"containerID\":\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\",\"image\":\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\",\"imageID\":\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\",\"lastState\":{},\"name\":\"setup\",\"ready\":true,\"restartCount\":0,\"started\":false,\"state\":{\"terminated\":{\"containerID\":\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\",\"exitCode\":0,\"finishedAt\":\"2026-01-23T14:04:14Z\",\"reason\":\"Completed\",\"startedAt\":\"2026-01-23T14:04:14Z\"}},\"volumeMounts\":[{\"mountPath\":\"/var/log/kube-apiserver\",\"name\":\"audit-dir\"}]}],\"phase\":\"Running\",\"podIP\":\"192.168.126.11\",\"podIPs\":[{\"ip\":\"192.168.126.11\"}],\"startTime\":\"2026-01-23T14:04:13Z\"}}" for pod "openshift-kube-apiserver"/"kube-apiserver-crc": Internal error occurred: failed calling webhook "pod.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/pod?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.944391 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9faab1b3-3f25-40a9-852f-64e14dd51f6b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-z55mw\" (UID: \"9faab1b3-3f25-40a9-852f-64e14dd51f6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.944435 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95ckj\" (UniqueName: \"kubernetes.io/projected/9faab1b3-3f25-40a9-852f-64e14dd51f6b-kube-api-access-95ckj\") pod \"ovnkube-control-plane-749d76644c-z55mw\" (UID: \"9faab1b3-3f25-40a9-852f-64e14dd51f6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.944480 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9faab1b3-3f25-40a9-852f-64e14dd51f6b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-z55mw\" (UID: \"9faab1b3-3f25-40a9-852f-64e14dd51f6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.944514 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9faab1b3-3f25-40a9-852f-64e14dd51f6b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-z55mw\" (UID: \"9faab1b3-3f25-40a9-852f-64e14dd51f6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.945676 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.945719 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.945727 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.945742 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.945753 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:49Z","lastTransitionTime":"2026-01-23T14:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.950441 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z"
Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.968060 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.982797 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.996977 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.012562 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.017202 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovnkube-controller/1.log" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.017857 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovnkube-controller/0.log" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.020861 4775 generic.go:334] "Generic (PLEG): container finished" podID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerID="e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5" exitCode=1 Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.020911 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerDied","Data":"e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5"} Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.020985 4775 scope.go:117] "RemoveContainer" containerID="ee53a36fd4e619c3304bba625f006b04da1798421f233f341251c9fd5a17cf9e" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.021650 4775 scope.go:117] "RemoveContainer" containerID="e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5" Jan 23 14:04:50 crc kubenswrapper[4775]: E0123 14:04:50.021965 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.030248 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.045971 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.046063 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/9faab1b3-3f25-40a9-852f-64e14dd51f6b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-z55mw\" (UID: \"9faab1b3-3f25-40a9-852f-64e14dd51f6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.046179 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9faab1b3-3f25-40a9-852f-64e14dd51f6b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-z55mw\" (UID: \"9faab1b3-3f25-40a9-852f-64e14dd51f6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.046268 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9faab1b3-3f25-40a9-852f-64e14dd51f6b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-z55mw\" (UID: \"9faab1b3-3f25-40a9-852f-64e14dd51f6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.046307 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95ckj\" (UniqueName: \"kubernetes.io/projected/9faab1b3-3f25-40a9-852f-64e14dd51f6b-kube-api-access-95ckj\") pod \"ovnkube-control-plane-749d76644c-z55mw\" (UID: \"9faab1b3-3f25-40a9-852f-64e14dd51f6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.047068 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9faab1b3-3f25-40a9-852f-64e14dd51f6b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-z55mw\" (UID: \"9faab1b3-3f25-40a9-852f-64e14dd51f6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.047080 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9faab1b3-3f25-40a9-852f-64e14dd51f6b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-z55mw\" (UID: \"9faab1b3-3f25-40a9-852f-64e14dd51f6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.048037 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.048070 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.048083 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.048102 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.048116 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:50Z","lastTransitionTime":"2026-01-23T14:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.062041 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9faab1b3-3f25-40a9-852f-64e14dd51f6b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-z55mw\" (UID: \"9faab1b3-3f25-40a9-852f-64e14dd51f6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.062522 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.068271 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95ckj\" (UniqueName: \"kubernetes.io/projected/9faab1b3-3f25-40a9-852f-64e14dd51f6b-kube-api-access-95ckj\") pod \"ovnkube-control-plane-749d76644c-z55mw\" (UID: \"9faab1b3-3f25-40a9-852f-64e14dd51f6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.078823 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.093538 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.106930 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.124257 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.149028 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.151098 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.151340 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.151468 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.151590 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.151895 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:50Z","lastTransitionTime":"2026-01-23T14:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.167641 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.183085 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.191561 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.197452 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.213698 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.231159 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.256293 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.256381 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.256406 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.256441 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.256468 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:50Z","lastTransitionTime":"2026-01-23T14:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.268745 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.286347 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.314090 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e859953b87c3a3d0413118cd0c2f199cb6576dc3
f9f136effb8ac6059d9d74d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee53a36fd4e619c3304bba625f006b04da1798421f233f341251c9fd5a17cf9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:04:46Z\\\",\\\"message\\\":\\\" 6102 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 14:04:45.926481 6102 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 14:04:45.926527 6102 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 14:04:45.926550 6102 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 14:04:45.926593 6102 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 14:04:45.926635 6102 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 14:04:45.926609 6102 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 14:04:45.926655 6102 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 14:04:45.926678 6102 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 14:04:45.926719 6102 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 14:04:45.926733 6102 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 14:04:45.926720 6102 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 14:04:45.926761 6102 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 14:04:45.926770 6102 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 14:04:45.926799 6102 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 14:04:45.926769 6102 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"message\\\":\\\"nsole-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:04:49.017949 6230 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/package-server-manager-metrics\\\\\\\"}\\\\nI0123 14:04:49.017910 6230 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-controllers]} name:Service_openshift-machine-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:04:49.017979 6230 ovnkube.go:137] failed to run ovnkube: [failed to start network 
controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to sh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs
\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.336933 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.358517 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.359894 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.360019 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.360188 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.360227 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.360243 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:50Z","lastTransitionTime":"2026-01-23T14:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.371828 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.385552 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.396657 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.411390 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.432452 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.450367 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.463443 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.463491 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.463502 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.463523 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.463535 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:50Z","lastTransitionTime":"2026-01-23T14:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.566627 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.567212 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.567231 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.567258 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.567277 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:50Z","lastTransitionTime":"2026-01-23T14:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.639241 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-47lz2"] Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.639756 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:04:50 crc kubenswrapper[4775]: E0123 14:04:50.639831 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.656446 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.670303 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.670286 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z"
Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.670345 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.670356 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.670371 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.670388 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:50Z","lastTransitionTime":"2026-01-23T14:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.688065 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 22:26:29.757848991 +0000 UTC Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.694642 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30
a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure
-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.714162 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.731257 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e859953b87c3a3d0413118cd0c2f199cb6576dc3
f9f136effb8ac6059d9d74d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee53a36fd4e619c3304bba625f006b04da1798421f233f341251c9fd5a17cf9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:04:46Z\\\",\\\"message\\\":\\\" 6102 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 14:04:45.926481 6102 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 14:04:45.926527 6102 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 14:04:45.926550 6102 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 14:04:45.926593 6102 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 14:04:45.926635 6102 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 14:04:45.926609 6102 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 14:04:45.926655 6102 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 14:04:45.926678 6102 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 14:04:45.926719 6102 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 14:04:45.926733 6102 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 14:04:45.926720 6102 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 14:04:45.926761 6102 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 14:04:45.926770 6102 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 14:04:45.926799 6102 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 14:04:45.926769 6102 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"message\\\":\\\"nsole-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:04:49.017949 6230 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/package-server-manager-metrics\\\\\\\"}\\\\nI0123 14:04:49.017910 6230 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-controllers]} name:Service_openshift-machine-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:04:49.017979 6230 ovnkube.go:137] failed to run ovnkube: [failed to start network 
controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to sh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs
\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.745058 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.753077 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs\") pod \"network-metrics-daemon-47lz2\" (UID: \"63ed1a97-c97e-40d0-afdf-260c475dc83f\") " pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.753131 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgjq7\" (UniqueName: \"kubernetes.io/projected/63ed1a97-c97e-40d0-afdf-260c475dc83f-kube-api-access-cgjq7\") pod \"network-metrics-daemon-47lz2\" (UID: \"63ed1a97-c97e-40d0-afdf-260c475dc83f\") " pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:04:50 
crc kubenswrapper[4775]: I0123 14:04:50.755719 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.768098 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.772322 4775 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.772362 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.772372 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.772390 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.772400 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:50Z","lastTransitionTime":"2026-01-23T14:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.781335 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.792880 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.804916 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.824143 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.838079 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is 
after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.850472 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.854046 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgjq7\" (UniqueName: 
\"kubernetes.io/projected/63ed1a97-c97e-40d0-afdf-260c475dc83f-kube-api-access-cgjq7\") pod \"network-metrics-daemon-47lz2\" (UID: \"63ed1a97-c97e-40d0-afdf-260c475dc83f\") " pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.854084 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs\") pod \"network-metrics-daemon-47lz2\" (UID: \"63ed1a97-c97e-40d0-afdf-260c475dc83f\") " pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:04:50 crc kubenswrapper[4775]: E0123 14:04:50.854185 4775 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 14:04:50 crc kubenswrapper[4775]: E0123 14:04:50.854235 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs podName:63ed1a97-c97e-40d0-afdf-260c475dc83f nodeName:}" failed. No retries permitted until 2026-01-23 14:04:51.354221941 +0000 UTC m=+38.349050681 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs") pod "network-metrics-daemon-47lz2" (UID: "63ed1a97-c97e-40d0-afdf-260c475dc83f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.869316 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.873829 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgjq7\" (UniqueName: \"kubernetes.io/projected/63ed1a97-c97e-40d0-afdf-260c475dc83f-kube-api-access-cgjq7\") pod \"network-metrics-daemon-47lz2\" (UID: \"63ed1a97-c97e-40d0-afdf-260c475dc83f\") " pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.874553 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.874576 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.874585 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.874601 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.874611 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:50Z","lastTransitionTime":"2026-01-23T14:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.887145 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.903819 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.994999 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.995061 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.995078 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.995099 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.995114 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:50Z","lastTransitionTime":"2026-01-23T14:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.033610 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovnkube-controller/1.log" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.037840 4775 scope.go:117] "RemoveContainer" containerID="e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5" Jan 23 14:04:51 crc kubenswrapper[4775]: E0123 14:04:51.038023 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.042617 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" event={"ID":"9faab1b3-3f25-40a9-852f-64e14dd51f6b","Type":"ContainerStarted","Data":"f43da97bc3001c1066778d14029bd40271ef42849a6966caaf39da7174890aa7"} Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.042705 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" event={"ID":"9faab1b3-3f25-40a9-852f-64e14dd51f6b","Type":"ContainerStarted","Data":"c3e86d8bd8f77572c3ed3ba515863b0d66b2654865e89c4b05bf47072c458b9a"} Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.042741 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" event={"ID":"9faab1b3-3f25-40a9-852f-64e14dd51f6b","Type":"ContainerStarted","Data":"2cec27a59d2b24281c4cc8f2d0fc6df782eae0440f03c9763858dbd21fd19b2f"} Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.065560 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.080419 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.094529 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.097581 4775 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.097631 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.097644 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.097664 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.097677 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:51Z","lastTransitionTime":"2026-01-23T14:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.113392 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e859953b87c3a3d0413118cd0c2f199cb6576dc3
f9f136effb8ac6059d9d74d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"message\\\":\\\"nsole-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:04:49.017949 6230 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/package-server-manager-metrics\\\\\\\"}\\\\nI0123 14:04:49.017910 6230 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-controllers]} name:Service_openshift-machine-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:04:49.017979 6230 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to sh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.124260 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.138440 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.153328 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egr
ess-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-
6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\
\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.167726 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.184049 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.197081 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.199861 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.199913 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.199937 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.199967 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.199989 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:51Z","lastTransitionTime":"2026-01-23T14:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.213163 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.231314 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.245508 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.263571 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.295714 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.303188 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.303243 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.303255 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.303273 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.303286 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:51Z","lastTransitionTime":"2026-01-23T14:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.312970 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.331209 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.350627 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.359558 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs\") pod \"network-metrics-daemon-47lz2\" (UID: \"63ed1a97-c97e-40d0-afdf-260c475dc83f\") " pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:04:51 crc kubenswrapper[4775]: E0123 14:04:51.359961 4775 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 14:04:51 crc kubenswrapper[4775]: E0123 14:04:51.360127 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs podName:63ed1a97-c97e-40d0-afdf-260c475dc83f nodeName:}" 
failed. No retries permitted until 2026-01-23 14:04:52.36008736 +0000 UTC m=+39.354916200 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs") pod "network-metrics-daemon-47lz2" (UID: "63ed1a97-c97e-40d0-afdf-260c475dc83f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.368390 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.383763 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.405975 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.406082 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.406102 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.406126 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.406143 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:51Z","lastTransitionTime":"2026-01-23T14:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.415266 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"message\\\":\\\"nsole-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:04:49.017949 6230 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/package-server-manager-metrics\\\\\\\"}\\\\nI0123 14:04:49.017910 6230 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-controllers]} name:Service_openshift-machine-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:04:49.017979 6230 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to 
sh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.433544 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.449226 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.465483 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.483269 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.500260 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.508291 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.508333 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.508344 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.508360 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.508370 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:51Z","lastTransitionTime":"2026-01-23T14:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.524172 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.541754 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.559957 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.576331 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e86d8bd8f77572c3ed3ba515863b0d66b2654865e89c4b05bf47072c458b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f43da97bc3001c1066778d14029bd40271ef42849a6966caaf39da7174890aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 
14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.593905 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.610520 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.610591 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.610616 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.610649 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.610673 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:51Z","lastTransitionTime":"2026-01-23T14:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.616819 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.636337 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.651132 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.688532 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 11:46:27.559758946 +0000 UTC Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.713062 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:04:51 crc kubenswrapper[4775]: E0123 14:04:51.713286 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.713564 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:04:51 crc kubenswrapper[4775]: E0123 14:04:51.714034 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.714307 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.714357 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.714369 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.714388 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.714400 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:51Z","lastTransitionTime":"2026-01-23T14:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.714040 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:51 crc kubenswrapper[4775]: E0123 14:04:51.714652 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.818033 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.818096 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.818113 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.818140 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.818159 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:51Z","lastTransitionTime":"2026-01-23T14:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.920899 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.920944 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.920956 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.920973 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.920983 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:51Z","lastTransitionTime":"2026-01-23T14:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.023591 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.023874 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.023964 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.024054 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.024185 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:52Z","lastTransitionTime":"2026-01-23T14:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.127336 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.127399 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.127417 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.127441 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.127457 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:52Z","lastTransitionTime":"2026-01-23T14:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.230892 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.231215 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.231397 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.231536 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.231674 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:52Z","lastTransitionTime":"2026-01-23T14:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.334953 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.335256 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.335343 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.335441 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.335525 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:52Z","lastTransitionTime":"2026-01-23T14:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.375261 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs\") pod \"network-metrics-daemon-47lz2\" (UID: \"63ed1a97-c97e-40d0-afdf-260c475dc83f\") " pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:04:52 crc kubenswrapper[4775]: E0123 14:04:52.375575 4775 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 14:04:52 crc kubenswrapper[4775]: E0123 14:04:52.375692 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs podName:63ed1a97-c97e-40d0-afdf-260c475dc83f nodeName:}" failed. No retries permitted until 2026-01-23 14:04:54.375665149 +0000 UTC m=+41.370493929 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs") pod "network-metrics-daemon-47lz2" (UID: "63ed1a97-c97e-40d0-afdf-260c475dc83f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.439041 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.439106 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.439126 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.439153 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.439168 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:52Z","lastTransitionTime":"2026-01-23T14:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.543074 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.543189 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.543207 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.543228 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.543241 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:52Z","lastTransitionTime":"2026-01-23T14:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.646962 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.647468 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.647676 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.647867 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.648242 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:52Z","lastTransitionTime":"2026-01-23T14:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.689450 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 06:01:18.208473788 +0000 UTC Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.712936 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:04:52 crc kubenswrapper[4775]: E0123 14:04:52.713303 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.751756 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.751878 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.751899 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.751925 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.751944 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:52Z","lastTransitionTime":"2026-01-23T14:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.855004 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.855067 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.855083 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.855106 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.855120 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:52Z","lastTransitionTime":"2026-01-23T14:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.958452 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.958844 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.958956 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.959056 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.959239 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:52Z","lastTransitionTime":"2026-01-23T14:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.062308 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.062662 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.062798 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.063001 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.063145 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:53Z","lastTransitionTime":"2026-01-23T14:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.167378 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.167428 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.167445 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.167467 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.167483 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:53Z","lastTransitionTime":"2026-01-23T14:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.270020 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.270388 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.270537 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.270691 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.270979 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:53Z","lastTransitionTime":"2026-01-23T14:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.374058 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.374132 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.374154 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.374184 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.374208 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:53Z","lastTransitionTime":"2026-01-23T14:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.476734 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.476778 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.476788 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.476821 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.476832 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:53Z","lastTransitionTime":"2026-01-23T14:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.579979 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.580026 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.580038 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.580059 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.580080 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:53Z","lastTransitionTime":"2026-01-23T14:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.683121 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.683206 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.683223 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.683248 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.683265 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:53Z","lastTransitionTime":"2026-01-23T14:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.689589 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 20:11:06.251875357 +0000 UTC Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.713096 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.713271 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:04:53 crc kubenswrapper[4775]: E0123 14:04:53.713500 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.713555 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:04:53 crc kubenswrapper[4775]: E0123 14:04:53.713727 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:04:53 crc kubenswrapper[4775]: E0123 14:04:53.714059 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.734786 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:53Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.750915 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:53Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.767613 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e86d8bd8f77572c3ed3ba515863b0d66b2654865e89c4b05bf47072c458b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f43da97bc3001c1066778d14029bd40271ef42849a6966caaf39da7174890aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:53Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.787308 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.787383 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.787408 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.787442 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.787465 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:53Z","lastTransitionTime":"2026-01-23T14:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.787565 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:53Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.820652 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:53Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.832640 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:53Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.842214 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:53Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.855040 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:53Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.866569 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:53Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.877753 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:53Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.890820 4775 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.890868 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.890884 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.890905 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.890920 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:53Z","lastTransitionTime":"2026-01-23T14:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.901076 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e859953b87c3a3d0413118cd0c2f199cb6576dc3
f9f136effb8ac6059d9d74d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"message\\\":\\\"nsole-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:04:49.017949 6230 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/package-server-manager-metrics\\\\\\\"}\\\\nI0123 14:04:49.017910 6230 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-controllers]} name:Service_openshift-machine-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:04:49.017979 6230 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to sh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:53Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.919979 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-
cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:53Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.947330 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:53Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.962499 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:53Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.993326 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.993384 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.993399 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.993420 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.993436 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:53Z","lastTransitionTime":"2026-01-23T14:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.003511 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:54Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.027251 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:54Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.038551 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-23T14:04:54Z is after 2025-08-24T17:21:41Z"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.096379 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.096427 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.096440 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.096461 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.096475 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:54Z","lastTransitionTime":"2026-01-23T14:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.199852 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.199921 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.199957 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.199982 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.199998 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:54Z","lastTransitionTime":"2026-01-23T14:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.302629 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.302688 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.302704 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.302728 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.302745 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:54Z","lastTransitionTime":"2026-01-23T14:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.404157 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs\") pod \"network-metrics-daemon-47lz2\" (UID: \"63ed1a97-c97e-40d0-afdf-260c475dc83f\") " pod="openshift-multus/network-metrics-daemon-47lz2"
Jan 23 14:04:54 crc kubenswrapper[4775]: E0123 14:04:54.404411 4775 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 23 14:04:54 crc kubenswrapper[4775]: E0123 14:04:54.404523 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs podName:63ed1a97-c97e-40d0-afdf-260c475dc83f nodeName:}" failed. No retries permitted until 2026-01-23 14:04:58.404492866 +0000 UTC m=+45.399321646 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs") pod "network-metrics-daemon-47lz2" (UID: "63ed1a97-c97e-40d0-afdf-260c475dc83f") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.406181 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.406289 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.406306 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.406366 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.406401 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:54Z","lastTransitionTime":"2026-01-23T14:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.509724 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.509793 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.509862 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.509917 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.509940 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:54Z","lastTransitionTime":"2026-01-23T14:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.613490 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.613549 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.613566 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.613590 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.613607 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:54Z","lastTransitionTime":"2026-01-23T14:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.693540 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 21:40:19.496234121 +0000 UTC
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.713169 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2"
Jan 23 14:04:54 crc kubenswrapper[4775]: E0123 14:04:54.713391 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.715894 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.715968 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.715990 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.716015 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.716035 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:54Z","lastTransitionTime":"2026-01-23T14:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.818702 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.818757 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.818773 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.818835 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.818853 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:54Z","lastTransitionTime":"2026-01-23T14:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.922329 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.922394 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.922410 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.922433 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.922450 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:54Z","lastTransitionTime":"2026-01-23T14:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.025705 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.025761 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.025772 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.025790 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.025819 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:55Z","lastTransitionTime":"2026-01-23T14:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.129147 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.129190 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.129200 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.129217 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.129230 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:55Z","lastTransitionTime":"2026-01-23T14:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.232226 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.232274 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.232285 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.232302 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.232315 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:55Z","lastTransitionTime":"2026-01-23T14:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.331861 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.331907 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.331921 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.331941 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.331954 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:55Z","lastTransitionTime":"2026-01-23T14:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:55 crc kubenswrapper[4775]: E0123 14:04:55.349908 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:55Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.354458 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.354495 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.354505 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.354524 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.354537 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:55Z","lastTransitionTime":"2026-01-23T14:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:55 crc kubenswrapper[4775]: E0123 14:04:55.370706 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:55Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.375053 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.375090 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.375098 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.375113 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.375124 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:55Z","lastTransitionTime":"2026-01-23T14:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:55 crc kubenswrapper[4775]: E0123 14:04:55.393931 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:55Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.398733 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.398770 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.398781 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.398811 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.398819 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:55Z","lastTransitionTime":"2026-01-23T14:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:55 crc kubenswrapper[4775]: E0123 14:04:55.414100 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:55Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.418182 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.418204 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.418212 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.418223 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.418232 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:55Z","lastTransitionTime":"2026-01-23T14:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:55 crc kubenswrapper[4775]: E0123 14:04:55.435636 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:55Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:55 crc kubenswrapper[4775]: E0123 14:04:55.435817 4775 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.437260 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.437316 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.437328 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.437349 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.437362 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:55Z","lastTransitionTime":"2026-01-23T14:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.540130 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.540180 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.540193 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.540210 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.540223 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:55Z","lastTransitionTime":"2026-01-23T14:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.643270 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.643325 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.643334 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.643352 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.643363 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:55Z","lastTransitionTime":"2026-01-23T14:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.693886 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 23:33:01.854468558 +0000 UTC Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.713299 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.713378 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.713389 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:55 crc kubenswrapper[4775]: E0123 14:04:55.713508 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:04:55 crc kubenswrapper[4775]: E0123 14:04:55.713695 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:04:55 crc kubenswrapper[4775]: E0123 14:04:55.713871 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.746682 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.746743 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.746758 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.746782 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.746834 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:55Z","lastTransitionTime":"2026-01-23T14:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.850334 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.850402 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.850421 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.850453 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.850477 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:55Z","lastTransitionTime":"2026-01-23T14:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.954185 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.954253 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.954272 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.954298 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.954317 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:55Z","lastTransitionTime":"2026-01-23T14:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.057584 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.057649 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.057668 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.057704 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.057728 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:56Z","lastTransitionTime":"2026-01-23T14:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.160957 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.160998 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.161007 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.161021 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.161032 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:56Z","lastTransitionTime":"2026-01-23T14:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.264539 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.264624 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.264639 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.264667 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.264684 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:56Z","lastTransitionTime":"2026-01-23T14:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.369676 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.369742 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.369759 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.369795 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.369854 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:56Z","lastTransitionTime":"2026-01-23T14:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.478837 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.479139 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.479156 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.479177 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.479188 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:56Z","lastTransitionTime":"2026-01-23T14:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.582084 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.582155 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.582173 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.582198 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.582217 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:56Z","lastTransitionTime":"2026-01-23T14:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.685951 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.686042 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.686061 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.686085 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.686102 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:56Z","lastTransitionTime":"2026-01-23T14:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.694564 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 04:36:42.266968699 +0000 UTC Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.714048 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:04:56 crc kubenswrapper[4775]: E0123 14:04:56.714260 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.789101 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.789160 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.789178 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.789201 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.789218 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:56Z","lastTransitionTime":"2026-01-23T14:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.891853 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.891930 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.891965 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.891991 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.892003 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:56Z","lastTransitionTime":"2026-01-23T14:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.995457 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.995558 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.995579 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.995606 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.995623 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:56Z","lastTransitionTime":"2026-01-23T14:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.098334 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.098401 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.098426 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.098455 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.098472 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:57Z","lastTransitionTime":"2026-01-23T14:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.200881 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.200963 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.200980 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.201003 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.201018 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:57Z","lastTransitionTime":"2026-01-23T14:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.304401 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.304746 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.304765 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.304791 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.304843 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:57Z","lastTransitionTime":"2026-01-23T14:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.407943 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.408001 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.408017 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.408040 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.408093 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:57Z","lastTransitionTime":"2026-01-23T14:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.511030 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.511096 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.511120 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.511151 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.511173 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:57Z","lastTransitionTime":"2026-01-23T14:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.614770 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.614897 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.614921 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.614951 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.614976 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:57Z","lastTransitionTime":"2026-01-23T14:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.695776 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 03:26:52.383448779 +0000 UTC Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.713439 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:57 crc kubenswrapper[4775]: E0123 14:04:57.713617 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.713945 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:04:57 crc kubenswrapper[4775]: E0123 14:04:57.714052 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.714160 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:04:57 crc kubenswrapper[4775]: E0123 14:04:57.714435 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.719637 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.719702 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.719725 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.719754 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.719776 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:57Z","lastTransitionTime":"2026-01-23T14:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.823509 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.823576 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.823594 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.823618 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.823642 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:57Z","lastTransitionTime":"2026-01-23T14:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.927054 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.927124 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.927142 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.927167 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.927190 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:57Z","lastTransitionTime":"2026-01-23T14:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.031044 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.031119 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.031143 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.031177 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.031201 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:58Z","lastTransitionTime":"2026-01-23T14:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.134285 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.134354 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.134378 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.134410 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.134432 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:58Z","lastTransitionTime":"2026-01-23T14:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.237225 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.237274 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.237286 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.237304 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.237313 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:58Z","lastTransitionTime":"2026-01-23T14:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.340156 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.340206 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.340223 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.340251 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.340268 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:58Z","lastTransitionTime":"2026-01-23T14:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.443638 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.443732 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.443755 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.443786 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.443858 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:58Z","lastTransitionTime":"2026-01-23T14:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.456216 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs\") pod \"network-metrics-daemon-47lz2\" (UID: \"63ed1a97-c97e-40d0-afdf-260c475dc83f\") " pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:04:58 crc kubenswrapper[4775]: E0123 14:04:58.456433 4775 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 14:04:58 crc kubenswrapper[4775]: E0123 14:04:58.456505 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs podName:63ed1a97-c97e-40d0-afdf-260c475dc83f nodeName:}" failed. No retries permitted until 2026-01-23 14:05:06.456486429 +0000 UTC m=+53.451315169 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs") pod "network-metrics-daemon-47lz2" (UID: "63ed1a97-c97e-40d0-afdf-260c475dc83f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.546535 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.546669 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.546692 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.546715 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.546731 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:58Z","lastTransitionTime":"2026-01-23T14:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.650156 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.650214 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.650232 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.650255 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.650273 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:58Z","lastTransitionTime":"2026-01-23T14:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.696227 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 11:39:57.603004679 +0000 UTC Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.713793 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:04:58 crc kubenswrapper[4775]: E0123 14:04:58.714037 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.753362 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.753436 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.753453 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.753478 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.753495 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:58Z","lastTransitionTime":"2026-01-23T14:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.856525 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.856581 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.856593 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.856621 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.856634 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:58Z","lastTransitionTime":"2026-01-23T14:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.959539 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.959594 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.959604 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.959621 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.959633 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:58Z","lastTransitionTime":"2026-01-23T14:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.062439 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.062490 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.062501 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.062517 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.062527 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:59Z","lastTransitionTime":"2026-01-23T14:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.166514 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.166578 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.166602 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.166644 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.166667 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:59Z","lastTransitionTime":"2026-01-23T14:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.269917 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.269966 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.269977 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.269993 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.270007 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:59Z","lastTransitionTime":"2026-01-23T14:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.372853 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.372902 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.372913 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.372929 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.372941 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:59Z","lastTransitionTime":"2026-01-23T14:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.476878 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.476921 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.476930 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.476945 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.476955 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:59Z","lastTransitionTime":"2026-01-23T14:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.580008 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.580063 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.580079 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.580098 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.580128 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:59Z","lastTransitionTime":"2026-01-23T14:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.682473 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.682558 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.682578 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.682602 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.682620 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:59Z","lastTransitionTime":"2026-01-23T14:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.696845 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 15:14:30.167411392 +0000 UTC Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.713153 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.713218 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:04:59 crc kubenswrapper[4775]: E0123 14:04:59.713324 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.713350 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:04:59 crc kubenswrapper[4775]: E0123 14:04:59.713460 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:04:59 crc kubenswrapper[4775]: E0123 14:04:59.713669 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.785446 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.785488 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.785497 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.785515 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.785525 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:59Z","lastTransitionTime":"2026-01-23T14:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.888077 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.888120 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.888131 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.888149 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.888160 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:59Z","lastTransitionTime":"2026-01-23T14:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.991313 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.991373 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.991389 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.991413 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.991431 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:59Z","lastTransitionTime":"2026-01-23T14:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.093472 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.093523 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.093534 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.093553 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.093566 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:00Z","lastTransitionTime":"2026-01-23T14:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.195727 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.195766 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.195777 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.195792 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.195837 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:00Z","lastTransitionTime":"2026-01-23T14:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.298678 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.298719 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.298731 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.298746 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.298758 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:00Z","lastTransitionTime":"2026-01-23T14:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.401089 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.401162 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.401183 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.401209 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.401231 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:00Z","lastTransitionTime":"2026-01-23T14:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.503579 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.503629 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.503642 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.503658 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.503669 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:00Z","lastTransitionTime":"2026-01-23T14:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.608663 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.608709 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.608717 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.608732 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.608741 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:00Z","lastTransitionTime":"2026-01-23T14:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.697300 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 19:58:10.276233665 +0000 UTC Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.712433 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.712496 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.712515 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.712539 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.712555 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:00Z","lastTransitionTime":"2026-01-23T14:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.712951 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:00 crc kubenswrapper[4775]: E0123 14:05:00.713124 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.815543 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.815619 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.815639 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.815666 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.815684 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:00Z","lastTransitionTime":"2026-01-23T14:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.918634 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.918684 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.918696 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.918714 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.918728 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:00Z","lastTransitionTime":"2026-01-23T14:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.022576 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.022657 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.022680 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.022707 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.022726 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:01Z","lastTransitionTime":"2026-01-23T14:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.125130 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.125168 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.125175 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.125189 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.125199 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:01Z","lastTransitionTime":"2026-01-23T14:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.227687 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.227732 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.227744 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.227761 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.227794 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:01Z","lastTransitionTime":"2026-01-23T14:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.330480 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.330555 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.330566 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.330581 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.330591 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:01Z","lastTransitionTime":"2026-01-23T14:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.433522 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.433568 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.433578 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.433595 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.433607 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:01Z","lastTransitionTime":"2026-01-23T14:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.537014 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.537104 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.537123 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.537164 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.537187 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:01Z","lastTransitionTime":"2026-01-23T14:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.640362 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.640402 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.640415 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.640441 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.640456 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:01Z","lastTransitionTime":"2026-01-23T14:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.698099 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 09:03:23.403345058 +0000 UTC Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.713527 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.713595 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:01 crc kubenswrapper[4775]: E0123 14:05:01.713715 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.713760 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:01 crc kubenswrapper[4775]: E0123 14:05:01.713993 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:01 crc kubenswrapper[4775]: E0123 14:05:01.714188 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.744190 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.744281 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.744305 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.744334 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.744355 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:01Z","lastTransitionTime":"2026-01-23T14:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.847556 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.847650 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.847676 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.847706 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.847726 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:01Z","lastTransitionTime":"2026-01-23T14:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.950994 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.951046 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.951067 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.951091 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.951111 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:01Z","lastTransitionTime":"2026-01-23T14:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.054371 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.054415 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.054432 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.054454 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.054467 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:02Z","lastTransitionTime":"2026-01-23T14:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.156848 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.156884 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.156935 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.156961 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.156977 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:02Z","lastTransitionTime":"2026-01-23T14:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.259516 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.259591 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.259610 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.259637 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.259655 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:02Z","lastTransitionTime":"2026-01-23T14:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.362724 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.362787 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.362885 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.362919 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.362940 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:02Z","lastTransitionTime":"2026-01-23T14:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.466440 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.466553 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.466630 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.466681 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.466707 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:02Z","lastTransitionTime":"2026-01-23T14:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.569943 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.570003 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.570021 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.570044 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.570060 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:02Z","lastTransitionTime":"2026-01-23T14:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.672511 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.672579 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.672603 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.672636 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.672663 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:02Z","lastTransitionTime":"2026-01-23T14:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.698326 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 09:19:53.78024853 +0000 UTC Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.713928 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:02 crc kubenswrapper[4775]: E0123 14:05:02.714075 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.714930 4775 scope.go:117] "RemoveContainer" containerID="e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.775327 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.775405 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.775422 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.775448 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.775465 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:02Z","lastTransitionTime":"2026-01-23T14:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.878946 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.879012 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.879028 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.879053 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.879098 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:02Z","lastTransitionTime":"2026-01-23T14:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.982175 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.982217 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.982229 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.982248 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.982261 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:02Z","lastTransitionTime":"2026-01-23T14:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.084195 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.084234 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.084247 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.084264 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.084275 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:03Z","lastTransitionTime":"2026-01-23T14:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.086848 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovnkube-controller/1.log" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.089392 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerStarted","Data":"cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba"} Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.089745 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.102786 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.116011 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.132830 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.143876 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.156572 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.174853 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.186575 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.186617 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.186633 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.186653 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.186667 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:03Z","lastTransitionTime":"2026-01-23T14:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.189725 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.203496 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.215073 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e86d8bd8f77572c3ed3ba515863b0d66b2654865e89c4b05bf47072c458b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f43da97bc3001c1066778d14029bd40271ef42849a6966caaf39da7174890aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 
14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.230407 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.258259 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/
\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"
cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.271231 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.288153 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.288917 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.288969 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.288981 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.288999 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.289012 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:03Z","lastTransitionTime":"2026-01-23T14:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.310527 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.330970 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.347789 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.369352 4775 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"message\\\":\\\"nsole-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:04:49.017949 6230 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/package-server-manager-metrics\\\\\\\"}\\\\nI0123 14:04:49.017910 6230 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-controllers]} name:Service_openshift-machine-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:04:49.017979 6230 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to 
sh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.390768 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.390828 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.390837 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.390854 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.390865 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:03Z","lastTransitionTime":"2026-01-23T14:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.492788 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.492843 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.492855 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.492872 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.492883 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:03Z","lastTransitionTime":"2026-01-23T14:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.595133 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.595167 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.595175 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.595189 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.595199 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:03Z","lastTransitionTime":"2026-01-23T14:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.614619 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.614697 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.614731 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.614751 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.614772 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:03 crc kubenswrapper[4775]: E0123 14:05:03.614849 4775 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 14:05:03 crc kubenswrapper[4775]: E0123 14:05:03.614930 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:05:35.614891383 +0000 UTC m=+82.609720153 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:05:03 crc kubenswrapper[4775]: E0123 14:05:03.614978 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 14:05:03 crc kubenswrapper[4775]: E0123 14:05:03.614986 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 14:05:35.614971825 +0000 UTC m=+82.609800605 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 14:05:03 crc kubenswrapper[4775]: E0123 14:05:03.614999 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 14:05:03 crc kubenswrapper[4775]: E0123 14:05:03.615014 4775 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:05:03 crc kubenswrapper[4775]: E0123 14:05:03.615037 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 14:05:03 crc kubenswrapper[4775]: E0123 14:05:03.615081 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 14:05:03 crc kubenswrapper[4775]: E0123 14:05:03.615049 4775 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 14:05:03 crc kubenswrapper[4775]: E0123 14:05:03.615103 4775 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:05:03 crc kubenswrapper[4775]: E0123 14:05:03.615055 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 14:05:35.615038587 +0000 UTC m=+82.609867427 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:05:03 crc kubenswrapper[4775]: E0123 14:05:03.615151 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 14:05:35.615127389 +0000 UTC m=+82.609956249 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 14:05:03 crc kubenswrapper[4775]: E0123 14:05:03.615171 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 14:05:35.61515938 +0000 UTC m=+82.609988280 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.698550 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 06:44:10.699649447 +0000 UTC Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.698638 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.698698 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.698720 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.698750 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.698771 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:03Z","lastTransitionTime":"2026-01-23T14:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.713651 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:03 crc kubenswrapper[4775]: E0123 14:05:03.713886 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.714473 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:03 crc kubenswrapper[4775]: E0123 14:05:03.714623 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.714914 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:03 crc kubenswrapper[4775]: E0123 14:05:03.715057 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.739784 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.761525 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.777161 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e86d8bd8f77572c3ed3ba515863b0d66b2654865e89c4b05bf47072c458b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f43da97bc3001c1066778d14029bd40271ef42849a6966caaf39da7174890aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.797605 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.801860 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.801902 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.801917 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.801935 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.801947 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:03Z","lastTransitionTime":"2026-01-23T14:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.840626 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.853514 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.866402 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.885014 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.900432 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.904929 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.905017 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.905045 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.905072 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.905088 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:03Z","lastTransitionTime":"2026-01-23T14:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.914907 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.934690 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"message\\\":\\\"nsole-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:04:49.017949 6230 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/package-server-manager-metrics\\\\\\\"}\\\\nI0123 14:04:49.017910 6230 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-controllers]} name:Service_openshift-machine-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:04:49.017979 6230 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to 
sh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.950205 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.965792 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.975994 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.991098 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.006981 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.007018 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.007027 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.007043 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.007053 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:04Z","lastTransitionTime":"2026-01-23T14:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.007885 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:
04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.019199 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.095548 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovnkube-controller/2.log" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.096591 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovnkube-controller/1.log" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.100089 4775 generic.go:334] "Generic (PLEG): container finished" podID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerID="cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba" exitCode=1 Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.100127 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerDied","Data":"cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba"} Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.100177 4775 scope.go:117] "RemoveContainer" containerID="e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.102891 4775 scope.go:117] "RemoveContainer" containerID="cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba" Jan 23 14:05:04 crc kubenswrapper[4775]: E0123 14:05:04.103325 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with 
CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.110484 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.110543 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.110565 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.110597 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.110617 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:04Z","lastTransitionTime":"2026-01-23T14:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.128901 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcon
t/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.150439 4775 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.167572 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.199203 4775 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"message\\\":\\\"nsole-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:04:49.017949 6230 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/package-server-manager-metrics\\\\\\\"}\\\\nI0123 14:04:49.017910 6230 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-controllers]} name:Service_openshift-machine-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:04:49.017979 6230 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to 
sh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:03Z\\\",\\\"message\\\":\\\"er_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0123 14:05:03.714446 6455 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:05:03.714448 6455 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 14:05:03.714393 6455 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:05:03.714525 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.212017 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"p
hase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.213469 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.213537 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.213603 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.213673 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.213774 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:04Z","lastTransitionTime":"2026-01-23T14:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.229078 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.247605 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.257565 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.268323 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.281512 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.293799 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.305851 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.317866 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.317910 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.317922 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.317938 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.317952 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:04Z","lastTransitionTime":"2026-01-23T14:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
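
The NodeNotReady records repeating through this window all carry the same root cause: the kubelet's runtime network check finds no CNI config, because the ovnkube-node pod that would write the OVN-Kubernetes conflist is itself failing (see the CrashLoopBackOff and webhook errors above). A stdlib-only Go sketch of that readiness condition, not the kubelet's actual implementation: the network stays NetworkReady=false until at least one .conf, .conflist, or .json file appears in the configured directory.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // cniConfFiles lists candidate CNI config files in dir, mirroring the
    // "no CNI configuration file" condition the kubelet keeps logging.
    func cniConfFiles(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var confs []string
        for _, e := range entries {
            if e.IsDir() {
                continue
            }
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                confs = append(confs, filepath.Join(dir, e.Name()))
            }
        }
        return confs, nil
    }

    func main() {
        confs, err := cniConfFiles("/etc/kubernetes/cni/net.d") // path from the log
        if err != nil || len(confs) == 0 {
            fmt.Println("NetworkReady=false: no CNI configuration file found")
            return
        }
        fmt.Println("CNI configs:", confs)
    }
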
Has your network provider started?"} Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.320328 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e86d8bd8f77572c3ed3ba515863b0d66b2654865e89c4b05bf47072c458b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f43da97bc3001c1066778d14029bd40271ef42849a6966caaf39da7174890aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.334081 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.352504 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.364418 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.374412 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.420874 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.420911 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.420922 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:04 crc 
kubenswrapper[4775]: I0123 14:05:04.420938 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.420949 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:04Z","lastTransitionTime":"2026-01-23T14:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.523835 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.523903 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.523921 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.523952 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.523975 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:04Z","lastTransitionTime":"2026-01-23T14:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.627850 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.627925 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.627949 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.627976 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.627995 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:04Z","lastTransitionTime":"2026-01-23T14:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.699097 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 21:53:46.858417138 +0000 UTC
Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.713703 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2"
Jan 23 14:05:04 crc kubenswrapper[4775]: E0123 14:05:04.714022 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f"
Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.730999 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.731056 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.731072 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.731093 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.731108 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:04Z","lastTransitionTime":"2026-01-23T14:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.833818 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.833858 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.833870 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.833885 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.833897 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:04Z","lastTransitionTime":"2026-01-23T14:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.937014 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.937098 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.937123 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.937171 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.937196 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:04Z","lastTransitionTime":"2026-01-23T14:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.040469 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.040527 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.040547 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.040573 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.040591 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:05Z","lastTransitionTime":"2026-01-23T14:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.105281 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovnkube-controller/2.log" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.110045 4775 scope.go:117] "RemoveContainer" containerID="cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba" Jan 23 14:05:05 crc kubenswrapper[4775]: E0123 14:05:05.110278 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.130189 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.143772 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.143829 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.143840 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.143857 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:05 crc 
kubenswrapper[4775]: I0123 14:05:05.143869 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:05Z","lastTransitionTime":"2026-01-23T14:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.149694 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.163597 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.186131 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf6
4935938a416eec769b10f6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:03Z\\\",\\\"message\\\":\\\"er_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0123 14:05:03.714446 6455 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:05:03.714448 6455 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 14:05:03.714393 6455 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:05:03.714525 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:05:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.201738 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.218170 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.229681 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.243040 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.247266 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.247320 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.247337 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.247358 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.247373 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:05Z","lastTransitionTime":"2026-01-23T14:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.263216 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.279668 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.298924 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.316553 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.333029 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e86d8bd8f77572c3ed3ba515863b0d66b2654865e89c4b05bf47072c458b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f43da97bc3001c1066778d14029bd40271ef42849a6966caaf39da7174890aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.350009 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.350334 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.350364 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.350397 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.350413 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.350424 4775 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:05Z","lastTransitionTime":"2026-01-23T14:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.380462 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.397457 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.412505 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.452952 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.452998 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.453011 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.453028 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.453039 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:05Z","lastTransitionTime":"2026-01-23T14:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.555160 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.555213 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.555248 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.555278 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.555297 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:05Z","lastTransitionTime":"2026-01-23T14:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.658226 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.658536 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.658706 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.658900 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.659065 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:05Z","lastTransitionTime":"2026-01-23T14:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.700229 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 22:40:30.769569416 +0000 UTC Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.705129 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.705243 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.705270 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.705308 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.705332 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:05Z","lastTransitionTime":"2026-01-23T14:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.714023 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.714030 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:05 crc kubenswrapper[4775]: E0123 14:05:05.714204 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:05 crc kubenswrapper[4775]: E0123 14:05:05.714348 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.714042 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:05 crc kubenswrapper[4775]: E0123 14:05:05.714563 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:05 crc kubenswrapper[4775]: E0123 14:05:05.729150 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.734391 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.734475 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.734504 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.734536 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.734556 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:05Z","lastTransitionTime":"2026-01-23T14:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:05 crc kubenswrapper[4775]: E0123 14:05:05.753899 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.758930 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.758987 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.759020 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.759046 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.759065 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:05Z","lastTransitionTime":"2026-01-23T14:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:05 crc kubenswrapper[4775]: E0123 14:05:05.779514 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.784447 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.784496 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.784513 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.784538 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.784557 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:05Z","lastTransitionTime":"2026-01-23T14:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:05 crc kubenswrapper[4775]: E0123 14:05:05.804743 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.810186 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.810236 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.810253 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.810280 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.810298 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:05Z","lastTransitionTime":"2026-01-23T14:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:05 crc kubenswrapper[4775]: E0123 14:05:05.830701 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: E0123 14:05:05.830968 4775 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.834120 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.834202 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.834222 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.834253 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.834275 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:05Z","lastTransitionTime":"2026-01-23T14:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.937862 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.937930 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.937943 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.937968 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.937983 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:05Z","lastTransitionTime":"2026-01-23T14:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.040206 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.040253 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.040264 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.040279 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.040291 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:06Z","lastTransitionTime":"2026-01-23T14:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.142876 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.142929 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.142948 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.142969 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.142983 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:06Z","lastTransitionTime":"2026-01-23T14:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.246742 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.246837 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.246857 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.246881 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.246900 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:06Z","lastTransitionTime":"2026-01-23T14:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.350073 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.350163 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.350180 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.350206 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.350223 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:06Z","lastTransitionTime":"2026-01-23T14:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.452926 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.453012 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.453031 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.453057 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.453075 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:06Z","lastTransitionTime":"2026-01-23T14:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.544422 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs\") pod \"network-metrics-daemon-47lz2\" (UID: \"63ed1a97-c97e-40d0-afdf-260c475dc83f\") " pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:06 crc kubenswrapper[4775]: E0123 14:05:06.544603 4775 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 14:05:06 crc kubenswrapper[4775]: E0123 14:05:06.544661 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs podName:63ed1a97-c97e-40d0-afdf-260c475dc83f nodeName:}" failed. No retries permitted until 2026-01-23 14:05:22.544642716 +0000 UTC m=+69.539471466 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs") pod "network-metrics-daemon-47lz2" (UID: "63ed1a97-c97e-40d0-afdf-260c475dc83f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.555527 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.555650 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.555671 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.555698 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.555716 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:06Z","lastTransitionTime":"2026-01-23T14:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.658788 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.659210 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.659416 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.659619 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.659906 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:06Z","lastTransitionTime":"2026-01-23T14:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.701795 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 17:31:27.970720928 +0000 UTC Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.713176 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:06 crc kubenswrapper[4775]: E0123 14:05:06.713576 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.762981 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.763060 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.763083 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.763114 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.763136 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:06Z","lastTransitionTime":"2026-01-23T14:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.866259 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.866297 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.866309 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.866323 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.866332 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:06Z","lastTransitionTime":"2026-01-23T14:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.969084 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.969164 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.969186 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.969214 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.969235 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:06Z","lastTransitionTime":"2026-01-23T14:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.072136 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.072222 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.072241 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.072273 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.072296 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:07Z","lastTransitionTime":"2026-01-23T14:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.176519 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.176608 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.176621 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.176646 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.176661 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:07Z","lastTransitionTime":"2026-01-23T14:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.300703 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.300756 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.300770 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.300794 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.300832 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:07Z","lastTransitionTime":"2026-01-23T14:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.403853 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.403977 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.404009 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.404099 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.404122 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:07Z","lastTransitionTime":"2026-01-23T14:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.507636 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.508057 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.508069 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.508086 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.508098 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:07Z","lastTransitionTime":"2026-01-23T14:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.612649 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.613120 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.613385 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.613548 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.613719 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:07Z","lastTransitionTime":"2026-01-23T14:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.703010 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 03:50:00.775772596 +0000 UTC Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.713451 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.713493 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.713539 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:07 crc kubenswrapper[4775]: E0123 14:05:07.713632 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:07 crc kubenswrapper[4775]: E0123 14:05:07.713896 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:07 crc kubenswrapper[4775]: E0123 14:05:07.714043 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.716560 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.716622 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.716640 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.716665 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.716682 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:07Z","lastTransitionTime":"2026-01-23T14:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.819667 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.819729 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.819745 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.819770 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.819787 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:07Z","lastTransitionTime":"2026-01-23T14:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.923219 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.923291 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.923315 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.923345 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.923366 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:07Z","lastTransitionTime":"2026-01-23T14:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.026100 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.026151 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.026167 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.026185 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.026201 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:08Z","lastTransitionTime":"2026-01-23T14:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.127793 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.127856 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.127865 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.127881 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.127890 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:08Z","lastTransitionTime":"2026-01-23T14:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.230705 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.230755 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.230766 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.230782 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.230794 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:08Z","lastTransitionTime":"2026-01-23T14:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.333604 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.333644 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.333657 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.333675 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.333688 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:08Z","lastTransitionTime":"2026-01-23T14:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.436917 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.436990 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.437013 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.437042 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.437065 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:08Z","lastTransitionTime":"2026-01-23T14:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.540255 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.540315 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.540339 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.540368 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.540390 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:08Z","lastTransitionTime":"2026-01-23T14:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.643928 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.644011 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.644033 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.644061 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.644082 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:08Z","lastTransitionTime":"2026-01-23T14:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.703674 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 20:28:20.857054239 +0000 UTC Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.713392 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:08 crc kubenswrapper[4775]: E0123 14:05:08.713571 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.746687 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.746738 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.747971 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.748180 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.748201 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:08Z","lastTransitionTime":"2026-01-23T14:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.851879 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.851920 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.851929 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.851946 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.851956 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:08Z","lastTransitionTime":"2026-01-23T14:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.954241 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.954312 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.954332 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.954356 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.954374 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:08Z","lastTransitionTime":"2026-01-23T14:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.056739 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.056772 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.056781 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.056816 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.056827 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:09Z","lastTransitionTime":"2026-01-23T14:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.159199 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.159242 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.159252 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.159267 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.159276 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:09Z","lastTransitionTime":"2026-01-23T14:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.261592 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.261632 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.261641 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.261656 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.261667 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:09Z","lastTransitionTime":"2026-01-23T14:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.363960 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.363998 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.364009 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.364026 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.364039 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:09Z","lastTransitionTime":"2026-01-23T14:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.430487 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.445131 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.452466 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.466672 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.466741 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.466764 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.466882 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.466909 4775 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:09Z","lastTransitionTime":"2026-01-23T14:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.470310 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.490106 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e86d8bd8f77572c3ed3ba515863b0d66b2654865e89c4b05bf47072c458b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f43da97bc3001c1066778d14029bd40271ef42849a6966caaf39da7174890aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.510370 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.542438 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.556767 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.569858 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.569912 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.569929 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.569953 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.569970 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:09Z","lastTransitionTime":"2026-01-23T14:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.575180 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.597717 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha2
56:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.618482 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.635266 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.661197 4775 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:03Z\\\",\\\"message\\\":\\\"er_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0123 14:05:03.714446 6455 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:05:03.714448 6455 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 14:05:03.714393 6455 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:05:03.714525 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:05:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.673042 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.673123 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.673148 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.673180 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.673197 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:09Z","lastTransitionTime":"2026-01-23T14:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.680351 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.703668 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.704108 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 02:17:06.930551676 +0000 UTC Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.713169 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.713169 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:09 crc kubenswrapper[4775]: E0123 14:05:09.713343 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:09 crc kubenswrapper[4775]: E0123 14:05:09.713416 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.713413 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:09 crc kubenswrapper[4775]: E0123 14:05:09.713497 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.721022 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.741778 4775 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.764086 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.775891 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.775935 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.775946 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.775980 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.775994 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:09Z","lastTransitionTime":"2026-01-23T14:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.779933 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.878760 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.878867 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.878887 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.878913 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.878931 4775 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:09Z","lastTransitionTime":"2026-01-23T14:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.981718 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.981757 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.981770 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.981822 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.981833 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:09Z","lastTransitionTime":"2026-01-23T14:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.084259 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.084306 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.084319 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.084337 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.084349 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:10Z","lastTransitionTime":"2026-01-23T14:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.186919 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.186978 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.186998 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.187025 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.187043 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:10Z","lastTransitionTime":"2026-01-23T14:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.290288 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.290329 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.290338 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.290352 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.290361 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:10Z","lastTransitionTime":"2026-01-23T14:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.392410 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.392448 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.392457 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.392473 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.392484 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:10Z","lastTransitionTime":"2026-01-23T14:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.495209 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.495276 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.495294 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.495320 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.495340 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:10Z","lastTransitionTime":"2026-01-23T14:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.598234 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.598299 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.598319 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.598367 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.598400 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:10Z","lastTransitionTime":"2026-01-23T14:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.701384 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.701456 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.701472 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.701500 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.701518 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:10Z","lastTransitionTime":"2026-01-23T14:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.704662 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 16:40:00.240193891 +0000 UTC Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.713115 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:10 crc kubenswrapper[4775]: E0123 14:05:10.713353 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.804616 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.804646 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.804655 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.804668 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.804678 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:10Z","lastTransitionTime":"2026-01-23T14:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.908675 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.908747 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.908766 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.908832 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.908861 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:10Z","lastTransitionTime":"2026-01-23T14:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.011438 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.011474 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.011483 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.011498 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.011508 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:11Z","lastTransitionTime":"2026-01-23T14:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.114094 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.114134 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.114147 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.114163 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.114172 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:11Z","lastTransitionTime":"2026-01-23T14:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.218007 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.218079 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.218096 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.218124 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.218143 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:11Z","lastTransitionTime":"2026-01-23T14:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.321739 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.321819 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.321832 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.321854 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.321871 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:11Z","lastTransitionTime":"2026-01-23T14:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.425026 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.425078 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.425091 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.425115 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.425132 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:11Z","lastTransitionTime":"2026-01-23T14:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.527790 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.527900 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.527995 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.528019 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.528036 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:11Z","lastTransitionTime":"2026-01-23T14:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.631221 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.631369 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.631396 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.631426 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.631452 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:11Z","lastTransitionTime":"2026-01-23T14:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.705338 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 14:39:59.838675144 +0000 UTC Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.713923 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.714016 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.714021 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:11 crc kubenswrapper[4775]: E0123 14:05:11.714142 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:11 crc kubenswrapper[4775]: E0123 14:05:11.714286 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:11 crc kubenswrapper[4775]: E0123 14:05:11.714439 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.734562 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.734616 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.734631 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.734651 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.734666 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:11Z","lastTransitionTime":"2026-01-23T14:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.838707 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.838781 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.838831 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.838889 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.838929 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:11Z","lastTransitionTime":"2026-01-23T14:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.942273 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.942324 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.942344 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.942367 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.942385 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:11Z","lastTransitionTime":"2026-01-23T14:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.044979 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.045059 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.045082 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.045112 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.045134 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:12Z","lastTransitionTime":"2026-01-23T14:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.147086 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.147124 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.147135 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.147150 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.147160 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:12Z","lastTransitionTime":"2026-01-23T14:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.249843 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.249887 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.249897 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.249912 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.249922 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:12Z","lastTransitionTime":"2026-01-23T14:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.353465 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.353519 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.353536 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.353556 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.353571 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:12Z","lastTransitionTime":"2026-01-23T14:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.456319 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.456355 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.456368 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.456384 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.456396 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:12Z","lastTransitionTime":"2026-01-23T14:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.559198 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.559245 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.559259 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.559277 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.559288 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:12Z","lastTransitionTime":"2026-01-23T14:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.661930 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.661962 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.661970 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.661987 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.661998 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:12Z","lastTransitionTime":"2026-01-23T14:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.705472 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 16:08:06.077152554 +0000 UTC Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.713423 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:12 crc kubenswrapper[4775]: E0123 14:05:12.713648 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.764371 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.764420 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.764436 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.764457 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.764471 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:12Z","lastTransitionTime":"2026-01-23T14:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.867424 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.867496 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.867518 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.867550 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.867572 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:12Z","lastTransitionTime":"2026-01-23T14:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.970990 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.971047 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.971073 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.971099 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.971116 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:12Z","lastTransitionTime":"2026-01-23T14:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.073993 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.074022 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.074031 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.074068 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.074081 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:13Z","lastTransitionTime":"2026-01-23T14:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.177052 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.177104 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.177115 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.177129 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.177138 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:13Z","lastTransitionTime":"2026-01-23T14:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.279563 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.279624 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.279637 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.279656 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.279668 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:13Z","lastTransitionTime":"2026-01-23T14:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.382377 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.382425 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.382438 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.382456 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.382469 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:13Z","lastTransitionTime":"2026-01-23T14:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.486240 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.486294 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.486320 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.486353 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.486374 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:13Z","lastTransitionTime":"2026-01-23T14:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.589139 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.589228 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.589249 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.589283 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.589302 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:13Z","lastTransitionTime":"2026-01-23T14:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.692858 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.692958 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.692979 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.693013 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.693035 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:13Z","lastTransitionTime":"2026-01-23T14:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.706212 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 21:21:22.425653057 +0000 UTC Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.713956 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.714077 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:13 crc kubenswrapper[4775]: E0123 14:05:13.714159 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.714088 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:13 crc kubenswrapper[4775]: E0123 14:05:13.714325 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:13 crc kubenswrapper[4775]: E0123 14:05:13.714461 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.733511 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.750445 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.763143 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.780278 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.799315 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.799743 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.799996 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.800190 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.800337 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:13Z","lastTransitionTime":"2026-01-23T14:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.800021 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.814704 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.830898 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04f5b2ad-c277-4ce9-8a8e-1ae658a6820c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3301338273f633b6c32caed6b35db93841743e57f219115ae7c32e16fe4683f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41b0811b85f5245c0352225af50738ebaa72c1e52a2940ee42f5bc99218313ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3e880aa503bbce5a53073f7f735d1defcde092982f39958cd58020b2139b7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.845304 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e86d8bd8f77572c3ed3ba515863b0d66b2654865e89c4b05bf47072c458b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f43da97bc3001c1066778d14029bd40271ef42849a6966caaf39da7174890aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.860494 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.876733 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.890712 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.902343 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.905025 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.905089 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.905111 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.905140 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.905160 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:13Z","lastTransitionTime":"2026-01-23T14:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.919155 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.943482 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.957873 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.976629 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf6
4935938a416eec769b10f6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:03Z\\\",\\\"message\\\":\\\"er_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0123 14:05:03.714446 6455 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:05:03.714448 6455 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 14:05:03.714393 6455 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:05:03.714525 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:05:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.990517 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.004830 4775 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:14Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.007766 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.007789 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.007810 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.007826 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.007835 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:14Z","lastTransitionTime":"2026-01-23T14:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.111084 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.111376 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.111446 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.111509 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.111569 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:14Z","lastTransitionTime":"2026-01-23T14:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.213926 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.213966 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.213976 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.213992 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.214001 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:14Z","lastTransitionTime":"2026-01-23T14:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.316277 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.316348 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.316360 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.316385 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.316401 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:14Z","lastTransitionTime":"2026-01-23T14:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.418880 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.418928 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.418935 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.418948 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.418957 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:14Z","lastTransitionTime":"2026-01-23T14:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.522220 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.522283 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.522306 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.522336 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.522357 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:14Z","lastTransitionTime":"2026-01-23T14:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.625479 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.625546 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.625568 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.625634 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.625658 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:14Z","lastTransitionTime":"2026-01-23T14:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.707140 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 04:05:05.658986093 +0000 UTC
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.713567 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2"
Jan 23 14:05:14 crc kubenswrapper[4775]: E0123 14:05:14.713784 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.728935 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.728970 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.728982 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.728999 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.729011 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:14Z","lastTransitionTime":"2026-01-23T14:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.832846 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.832907 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.832925 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.832949 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.832967 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:14Z","lastTransitionTime":"2026-01-23T14:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.934991 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.935034 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.935049 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.935070 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.935087 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:14Z","lastTransitionTime":"2026-01-23T14:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.038209 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.038298 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.038323 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.038353 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.038376 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:15Z","lastTransitionTime":"2026-01-23T14:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.141494 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.141555 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.141573 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.141597 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.141614 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:15Z","lastTransitionTime":"2026-01-23T14:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.245160 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.245210 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.245219 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.245233 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.245242 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:15Z","lastTransitionTime":"2026-01-23T14:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.347438 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.347472 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.347483 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.347500 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.347511 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:15Z","lastTransitionTime":"2026-01-23T14:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.450083 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.450469 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.450483 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.450503 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.450517 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:15Z","lastTransitionTime":"2026-01-23T14:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.553739 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.553793 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.553837 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.553860 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.553876 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:15Z","lastTransitionTime":"2026-01-23T14:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.657170 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.657221 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.657233 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.657707 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.657737 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:15Z","lastTransitionTime":"2026-01-23T14:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.707608 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 12:02:49.414223525 +0000 UTC
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.712946 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.712980 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 14:05:15 crc kubenswrapper[4775]: E0123 14:05:15.713133 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.713153 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 14:05:15 crc kubenswrapper[4775]: E0123 14:05:15.713297 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 14:05:15 crc kubenswrapper[4775]: E0123 14:05:15.713434 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.760129 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.760172 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.760184 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.760198 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.760210 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:15Z","lastTransitionTime":"2026-01-23T14:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.862519 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.862564 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.862576 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.862592 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.862603 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:15Z","lastTransitionTime":"2026-01-23T14:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.965717 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.965764 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.965788 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.965864 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.965891 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:15Z","lastTransitionTime":"2026-01-23T14:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.068262 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.068319 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.068328 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.068341 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.068350 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:16Z","lastTransitionTime":"2026-01-23T14:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.124476 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.124531 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.124544 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.124564 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.124576 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:16Z","lastTransitionTime":"2026-01-23T14:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:16 crc kubenswrapper[4775]: E0123 14:05:16.142075 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:16Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.147260 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.147330 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.147355 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.147384 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.147406 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:16Z","lastTransitionTime":"2026-01-23T14:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:16 crc kubenswrapper[4775]: E0123 14:05:16.163022 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:16Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.166947 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.167003 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.167021 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.167042 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.167055 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:16Z","lastTransitionTime":"2026-01-23T14:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:16 crc kubenswrapper[4775]: E0123 14:05:16.181285 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:16Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.184796 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.184880 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.184897 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.184921 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.184938 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:16Z","lastTransitionTime":"2026-01-23T14:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:16 crc kubenswrapper[4775]: E0123 14:05:16.201422 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:16Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.205969 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.206033 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.206052 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.206080 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.206099 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:16Z","lastTransitionTime":"2026-01-23T14:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:16 crc kubenswrapper[4775]: E0123 14:05:16.224738 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:16Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:16 crc kubenswrapper[4775]: E0123 14:05:16.224954 4775 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.226921 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.226956 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.226967 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.226984 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.226996 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:16Z","lastTransitionTime":"2026-01-23T14:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.330220 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.330254 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.330266 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.330282 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.330294 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:16Z","lastTransitionTime":"2026-01-23T14:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.434760 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.434878 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.434896 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.434915 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.434935 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:16Z","lastTransitionTime":"2026-01-23T14:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.538915 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.538960 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.538976 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.538999 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.539015 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:16Z","lastTransitionTime":"2026-01-23T14:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.642645 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.642758 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.642779 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.642846 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.642866 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:16Z","lastTransitionTime":"2026-01-23T14:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.708527 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 23:34:43.591372412 +0000 UTC Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.713900 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:16 crc kubenswrapper[4775]: E0123 14:05:16.714360 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.714522 4775 scope.go:117] "RemoveContainer" containerID="cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba" Jan 23 14:05:16 crc kubenswrapper[4775]: E0123 14:05:16.714677 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.747524 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.747675 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.747695 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.747756 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.747776 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:16Z","lastTransitionTime":"2026-01-23T14:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.850898 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.851029 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.851311 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.851338 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.851657 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:16Z","lastTransitionTime":"2026-01-23T14:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.954180 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.954215 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.954222 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.954237 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.954247 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:16Z","lastTransitionTime":"2026-01-23T14:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.056850 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.056915 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.056934 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.056956 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.056974 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:17Z","lastTransitionTime":"2026-01-23T14:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.160398 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.160469 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.160487 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.160510 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.160527 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:17Z","lastTransitionTime":"2026-01-23T14:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.263644 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.263701 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.263716 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.263745 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.263763 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:17Z","lastTransitionTime":"2026-01-23T14:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.366760 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.366796 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.366837 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.366856 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.366864 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:17Z","lastTransitionTime":"2026-01-23T14:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.469729 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.469838 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.469863 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.469893 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.469915 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:17Z","lastTransitionTime":"2026-01-23T14:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.573025 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.573066 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.573080 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.573097 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.573109 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:17Z","lastTransitionTime":"2026-01-23T14:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.675551 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.675589 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.675600 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.675616 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.675629 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:17Z","lastTransitionTime":"2026-01-23T14:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.709604 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 21:10:10.784158573 +0000 UTC Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.712955 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.713010 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:17 crc kubenswrapper[4775]: E0123 14:05:17.713075 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.712956 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:17 crc kubenswrapper[4775]: E0123 14:05:17.713204 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:17 crc kubenswrapper[4775]: E0123 14:05:17.713285 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.778169 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.778213 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.778224 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.778242 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.778252 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:17Z","lastTransitionTime":"2026-01-23T14:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.880990 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.881057 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.881075 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.881100 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.881118 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:17Z","lastTransitionTime":"2026-01-23T14:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.983226 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.983265 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.983276 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.983294 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.983306 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:17Z","lastTransitionTime":"2026-01-23T14:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.085833 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.085881 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.085897 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.085921 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.085937 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:18Z","lastTransitionTime":"2026-01-23T14:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.188376 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.188448 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.188496 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.188521 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.188537 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:18Z","lastTransitionTime":"2026-01-23T14:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.292198 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.292266 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.292283 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.292305 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.292325 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:18Z","lastTransitionTime":"2026-01-23T14:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.395233 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.395275 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.395285 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.395302 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.395316 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:18Z","lastTransitionTime":"2026-01-23T14:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.497569 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.497603 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.497614 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.497630 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.497640 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:18Z","lastTransitionTime":"2026-01-23T14:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.601353 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.601408 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.601424 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.601450 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.601468 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:18Z","lastTransitionTime":"2026-01-23T14:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.705180 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.705234 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.705246 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.705265 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.705278 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:18Z","lastTransitionTime":"2026-01-23T14:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.710476 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 19:36:53.814828794 +0000 UTC Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.713718 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:18 crc kubenswrapper[4775]: E0123 14:05:18.713889 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.808694 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.808723 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.808736 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.808753 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.808771 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:18Z","lastTransitionTime":"2026-01-23T14:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.911627 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.911675 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.911687 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.911704 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.911715 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:18Z","lastTransitionTime":"2026-01-23T14:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.711275 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 08:21:14.232977516 +0000 UTC
Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.713497 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:05:19 crc kubenswrapper[4775]: E0123 14:05:19.713700 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.714150 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 14:05:19 crc kubenswrapper[4775]: E0123 14:05:19.714261 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.714494 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 14:05:19 crc kubenswrapper[4775]: E0123 14:05:19.714598 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
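Every "Error syncing pod, skipping" record above names the same root cause embedded in the message: there is no CNI configuration file under /etc/kubernetes/cni/net.d/, so the container runtime reports NetworkReady=false and the kubelet declines to start sandboxes for pods that need the cluster network. A minimal sketch of the kind of check behind that message — scanning the conf directory for a usable network definition. The real logic lives in ocicni/libcni and also parses the files; the path and extensions here just mirror the log text:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains at least one CNI network
// definition by extension. Illustrative only: the runtime's loader
// additionally parses and validates each candidate file.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	fmt.Println("CNI config present:", ok, "err:", err)
}
```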
Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.711945 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 14:08:29.127765757 +0000 UTC
Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.713299 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2"
Jan 23 14:05:20 crc kubenswrapper[4775]: E0123 14:05:20.713486 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f"
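Each record in this capture shares klog's standard header — severity letter, MMDD date, timestamp, PID, and file:line — wrapped in a journald prefix. When sifting a log like this one, a small parser makes it easy to group records by source file or severity. A sketch written against the lines above; the regex is an illustration fitted to this capture, not an official grammar:

```go
package main

import (
	"fmt"
	"regexp"
)

// klogHeader matches the `I0123 14:05:20.713299 4775 util.go:30]` part of
// each record: severity, MMDD, wall-clock time, pid, and file:line.
var klogHeader = regexp.MustCompile(
	`([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+([\w.]+:\d+)\] (.*)`)

func main() {
	line := `I0123 14:05:20.713299 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2"`
	if m := klogHeader.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s\nmsg=%s\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}
```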
Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.712345 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 18:59:03.957042044 +0000 UTC
Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.713552 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.713636 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 14:05:21 crc kubenswrapper[4775]: E0123 14:05:21.713679 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.713706 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 14:05:21 crc kubenswrapper[4775]: E0123 14:05:21.713776 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 14:05:21 crc kubenswrapper[4775]: E0123 14:05:21.713839 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.617992 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs\") pod \"network-metrics-daemon-47lz2\" (UID: \"63ed1a97-c97e-40d0-afdf-260c475dc83f\") " pod="openshift-multus/network-metrics-daemon-47lz2"
Jan 23 14:05:22 crc kubenswrapper[4775]: E0123 14:05:22.618183 4775 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 23 14:05:22 crc kubenswrapper[4775]: E0123 14:05:22.618297 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs podName:63ed1a97-c97e-40d0-afdf-260c475dc83f nodeName:}" failed. No retries permitted until 2026-01-23 14:05:54.618275037 +0000 UTC m=+101.613103777 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs") pod "network-metrics-daemon-47lz2" (UID: "63ed1a97-c97e-40d0-afdf-260c475dc83f") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.712894 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 07:59:09.090882197 +0000 UTC
Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.712994 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2"
Jan 23 14:05:22 crc kubenswrapper[4775]: E0123 14:05:22.713118 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f"
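The nestedpendingoperations record shows the volume manager's exponential backoff at work: the failed MountVolume.SetUp for metrics-certs may not be retried for another 32s (until 14:05:54). That delay pattern is a small initial wait that doubles on each consecutive failure up to a cap. A sketch of the policy; the 500ms initial delay and ~2-minute cap are assumptions based on upstream Kubernetes defaults, not values read from this log:

```go
package main

import (
	"fmt"
	"time"
)

// durationBeforeRetry doubles from an initial delay on each consecutive
// failure and clamps at a maximum, the shape of the volume manager's
// backoff. Constants are assumed from upstream defaults.
func durationBeforeRetry(consecutiveFailures int) time.Duration {
	const initial = 500 * time.Millisecond
	const maxDelay = 2*time.Minute + 2*time.Second
	d := initial
	for i := 1; i < consecutiveFailures; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for n := 1; n <= 8; n++ {
		fmt.Printf("failure %d -> durationBeforeRetry %v\n", n, durationBeforeRetry(n))
	}
	// Under these assumptions the 7th consecutive failure yields 32s,
	// matching the "(durationBeforeRetry 32s)" in the record above.
}
```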
Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.182733 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hpxpf_ba4447c0-bada-49eb-b6b4-b25feff627a9/kube-multus/0.log"
Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.182776 4775 generic.go:334] "Generic (PLEG): container finished" podID="ba4447c0-bada-49eb-b6b4-b25feff627a9" containerID="d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec" exitCode=1
Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.182817 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hpxpf" event={"ID":"ba4447c0-bada-49eb-b6b4-b25feff627a9","Type":"ContainerDied","Data":"d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec"}
Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.183185 4775 scope.go:117] "RemoveContainer" containerID="d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec"
Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.197058 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.207780 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.221636 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.240653 4775 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:03Z\\\",\\\"message\\\":\\\"er_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0123 14:05:03.714446 6455 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:05:03.714448 6455 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 14:05:03.714393 6455 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:05:03.714525 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:05:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.242029 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.242097 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.242106 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.242146 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.242158 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:23Z","lastTransitionTime":"2026-01-23T14:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.256051 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:23Z\\\",\\\"message\\\":\\\"2026-01-23T14:04:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_deec2807-d78d-4cb4-94e7-8d84a64fcbe4\\\\n2026-01-23T14:04:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_deec2807-d78d-4cb4-94e7-8d84a64fcbe4 to /host/opt/cni/bin/\\\\n2026-01-23T14:04:38Z [verbose] multus-daemon started\\\\n2026-01-23T14:04:38Z [verbose] Readiness Indicator file check\\\\n2026-01-23T14:05:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.268585 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.277254 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.286276 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04f5b2ad-c277-4ce9-8a8e-1ae658a6820c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3301338273f633b6c32caed6b35db93841743e57f219115ae7c32e16fe4683f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41b0811b85f5245c0352225af50738ebaa72c1e52a2940ee42f5bc99218313ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3e880aa503bbce5a53073f7f735d1defcde092982f39958cd58020b2139b7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.309192 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.334347 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.344751 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.344782 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.344793 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.344826 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.344838 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:23Z","lastTransitionTime":"2026-01-23T14:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.351781 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.366489 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.378699 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.387403 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e86d8bd8f77572c3ed3ba515863b0d66b2654865e89c4b05bf47072c458b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f43da97bc3001c1066778d14029bd40271ef42849a6966caaf39da7174890aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.397407 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.415317 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.430207 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.442023 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.447492 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.447541 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.447553 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:23 crc 
kubenswrapper[4775]: I0123 14:05:23.447569 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.447580 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:23Z","lastTransitionTime":"2026-01-23T14:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.550054 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.550097 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.550108 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.550125 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.550137 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:23Z","lastTransitionTime":"2026-01-23T14:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.652946 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.652987 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.653006 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.653024 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.653035 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:23Z","lastTransitionTime":"2026-01-23T14:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.713606 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 17:28:40.045438346 +0000 UTC Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.713746 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.713883 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:23 crc kubenswrapper[4775]: E0123 14:05:23.714104 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.714123 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:23 crc kubenswrapper[4775]: E0123 14:05:23.714233 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:23 crc kubenswrapper[4775]: E0123 14:05:23.714328 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.733621 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.752647 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.755390 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.755428 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.755441 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.755457 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.755470 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:23Z","lastTransitionTime":"2026-01-23T14:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.767273 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.791414 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:03Z\\\",\\\"message\\\":\\\"er_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0123 14:05:03.714446 6455 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:05:03.714448 6455 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 14:05:03.714393 6455 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:05:03.714525 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:05:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.805534 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.830630 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:23Z\\\",\\\"message\\\":\\\"2026-01-23T14:04:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_deec2807-d78d-4cb4-94e7-8d84a64fcbe4\\\\n2026-01-23T14:04:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_deec2807-d78d-4cb4-94e7-8d84a64fcbe4 to /host/opt/cni/bin/\\\\n2026-01-23T14:04:38Z [verbose] multus-daemon started\\\\n2026-01-23T14:04:38Z [verbose] Readiness Indicator file check\\\\n2026-01-23T14:05:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.849419 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.857628 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.857662 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:23 crc 
kubenswrapper[4775]: I0123 14:05:23.857672 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.857688 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.857698 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:23Z","lastTransitionTime":"2026-01-23T14:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.860573 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.874420 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04f5b2ad-c277-4ce9-8a8e-1ae658a6820c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3301338273f633b6c32caed6b35db93841743e57f219115ae7c32e16fe4683f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41b0811b85f5245c0352225af50738ebaa72c1e52a2940ee42f5bc99218313ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3e880aa503bbce5a53073f7f735d1defcde092982f39958cd58020b2139b7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.892031 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.907291 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.926007 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.940524 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.956442 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e86d8bd8f77572c3ed3ba515863b0d66b2654865e89c4b05bf47072c458b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f43da97bc3001c1066778d14029bd40271ef42849a6966caaf39da7174890aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 
14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.960139 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.960208 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.960227 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.960254 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.960273 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:23Z","lastTransitionTime":"2026-01-23T14:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.977563 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.009020 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.028509 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.039194 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.063255 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.063338 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.063360 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:24 crc 
kubenswrapper[4775]: I0123 14:05:24.063389 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.063414 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:24Z","lastTransitionTime":"2026-01-23T14:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.166643 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.166689 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.166701 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.166725 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.166737 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:24Z","lastTransitionTime":"2026-01-23T14:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.189041 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hpxpf_ba4447c0-bada-49eb-b6b4-b25feff627a9/kube-multus/0.log" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.189127 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hpxpf" event={"ID":"ba4447c0-bada-49eb-b6b4-b25feff627a9","Type":"ContainerStarted","Data":"8f14be984531a60487db2daba36d9cba7f2bbafa8b8d68889c261f3b2260f058"} Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.206986 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.221631 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.241697 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf6
4935938a416eec769b10f6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:03Z\\\",\\\"message\\\":\\\"er_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0123 14:05:03.714446 6455 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:05:03.714448 6455 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 14:05:03.714393 6455 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:05:03.714525 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:05:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.255474 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.268077 4775 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.270313 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.270347 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:24 crc 
kubenswrapper[4775]: I0123 14:05:24.270356 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.270371 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.270383 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:24Z","lastTransitionTime":"2026-01-23T14:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.279148 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.288595 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.302267 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f14be984531a60487db2daba36d9cba7f2bbafa8b8d68889c261f3b2260f058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:23Z\\\",\\\"message\\\":\\\"2026-01-23T14:04:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_deec2807-d78d-4cb4-94e7-8d84a64fcbe4\\\\n2026-01-23T14:04:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_deec2807-d78d-4cb4-94e7-8d84a64fcbe4 to /host/opt/cni/bin/\\\\n2026-01-23T14:04:38Z [verbose] multus-daemon started\\\\n2026-01-23T14:04:38Z [verbose] Readiness Indicator file check\\\\n2026-01-23T14:05:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.320415 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.332056 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.342281 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04f5b2ad-c277-4ce9-8a8e-1ae658a6820c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3301338273f633b6c32caed6b35db93841743e57f219115ae7c32e16fe4683f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41b0811b85f5245c0352225af50738ebaa72c1e52a2940ee42f5bc99218313ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3e880aa503bbce5a53073f7f735d1defcde092982f39958cd58020b2139b7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.356139 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.365485 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e86d8bd8f77572c3ed3ba515863b0d66b2654865e89c4b05bf47072c458b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f43da97bc3001c1066778d14029bd40271ef42849a6966caaf39da7174890aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.373064 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.373095 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.373107 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.373124 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.373138 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:24Z","lastTransitionTime":"2026-01-23T14:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.379639 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.405067 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.418209 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.430582 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.444343 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.475007 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.475057 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.475069 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.475087 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.475100 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:24Z","lastTransitionTime":"2026-01-23T14:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.578956 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.579050 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.579077 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.579110 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.579149 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:24Z","lastTransitionTime":"2026-01-23T14:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.682727 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.682773 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.682783 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.682815 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.682827 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:24Z","lastTransitionTime":"2026-01-23T14:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.713738 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 10:36:51.815175239 +0000 UTC Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.713821 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:24 crc kubenswrapper[4775]: E0123 14:05:24.713990 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.786007 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.786067 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.786081 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.786099 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.786112 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:24Z","lastTransitionTime":"2026-01-23T14:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.888759 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.888833 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.888848 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.888866 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.888880 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:24Z","lastTransitionTime":"2026-01-23T14:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.992305 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.992361 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.992374 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.992394 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.992405 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:24Z","lastTransitionTime":"2026-01-23T14:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.096006 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.096086 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.096100 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.096124 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.096141 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:25Z","lastTransitionTime":"2026-01-23T14:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.198877 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.198932 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.198945 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.198962 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.198975 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:25Z","lastTransitionTime":"2026-01-23T14:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
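Each setters.go:603 entry above embeds the complete Ready condition as one JSON object, which is why the same message repeats verbatim while only the heartbeat and transition timestamps advance. The following Go sketch shows the shape of that payload and how it decodes; the struct is a local stand-in for the corev1.NodeCondition type, and the message field is shortened for readability.

// condition.go - decode the Ready condition JSON embedded in the
// "Node became not ready" entries above. Illustrative only; the struct
// below is a local stand-in for corev1.NodeCondition.
package main

import (
	"encoding/json"
	"fmt"
)

type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Payload copied from the 14:05:24.992405 entry above (message shortened).
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:24Z","lastTransitionTime":"2026-01-23T14:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("node Ready=%s reason=%s since %s\n", c.Status, c.Reason, c.LastTransitionTime)
}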
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.302660 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.302736 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.302751 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.302779 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.302795 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:25Z","lastTransitionTime":"2026-01-23T14:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.406133 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.406191 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.406206 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.406232 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.406248 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:25Z","lastTransitionTime":"2026-01-23T14:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.509326 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.509373 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.509388 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.509406 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.509419 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:25Z","lastTransitionTime":"2026-01-23T14:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.613036 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.613097 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.613110 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.613131 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.613148 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:25Z","lastTransitionTime":"2026-01-23T14:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.713936 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.713910 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 05:10:18.186619784 +0000 UTC
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.714034 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.714094 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 14:05:25 crc kubenswrapper[4775]: E0123 14:05:25.714238 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 14:05:25 crc kubenswrapper[4775]: E0123 14:05:25.714342 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 14:05:25 crc kubenswrapper[4775]: E0123 14:05:25.714438 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
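The certificate_manager.go:356 lines above log a fresh rotation deadline on each pass (2025-11-29 10:36:51 at 14:05:24, then 2025-11-28 05:10:18 at 14:05:25) because the deadline is re-drawn with jitter every time it is computed; both draws fall before the 2026-01-23 log time, so rotation of the kubelet-serving certificate is already due. The Go sketch below approximates that computation, assuming the client-go convention of picking a random point at roughly 70-90% of the certificate's lifetime; the NotBefore value is hypothetical, since the log only records the expiration.

// rotation.go - approximate how a jittered certificate rotation deadline
// is derived from a cert's validity window, as in the
// certificate_manager.go:356 lines above. Sketch only; the 70-90%
// jitter range is an assumption about client-go's behavior.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	// Pick a uniformly random point in [0.7, 0.9) of the lifetime, so
	// repeated computations yield different deadlines, as seen in the log.
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Expiration copied from the log; NotBefore is assumed (not logged).
	notAfter, _ := time.Parse("2006-01-02 15:04:05 -0700 MST", "2026-02-24 05:53:03 +0000 UTC")
	notBefore := notAfter.Add(-30 * 24 * time.Hour) // hypothetical 30-day cert
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}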
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.715791 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.715846 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.715861 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.715878 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.715891 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:25Z","lastTransitionTime":"2026-01-23T14:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.819310 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.819373 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.819388 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.819413 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.819428 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:25Z","lastTransitionTime":"2026-01-23T14:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.922341 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.922365 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.922373 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.922387 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.922418 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:25Z","lastTransitionTime":"2026-01-23T14:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.025278 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.025327 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.025337 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.025358 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.025369 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:26Z","lastTransitionTime":"2026-01-23T14:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.127573 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.127612 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.127627 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.127647 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.127657 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:26Z","lastTransitionTime":"2026-01-23T14:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.230528 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.230570 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.230582 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.230599 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.230611 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:26Z","lastTransitionTime":"2026-01-23T14:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.334101 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.334634 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.334645 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.334670 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.334689 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:26Z","lastTransitionTime":"2026-01-23T14:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.438111 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.438166 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.438176 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.438197 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.438209 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:26Z","lastTransitionTime":"2026-01-23T14:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.540490 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.540562 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.540574 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.540622 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.540635 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:26Z","lastTransitionTime":"2026-01-23T14:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.594186 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.594225 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.594236 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.594256 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.594269 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:26Z","lastTransitionTime":"2026-01-23T14:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:26 crc kubenswrapper[4775]: E0123 14:05:26.608656 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:26Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.613283 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.613335 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.613348 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.613367 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.613379 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:26Z","lastTransitionTime":"2026-01-23T14:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:26 crc kubenswrapper[4775]: E0123 14:05:26.631522 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:26Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.636833 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.636926 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.636945 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.636968 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.637016 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:26Z","lastTransitionTime":"2026-01-23T14:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:26 crc kubenswrapper[4775]: E0123 14:05:26.655433 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:26Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.661408 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.661494 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.661509 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.661532 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.661554 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:26Z","lastTransitionTime":"2026-01-23T14:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:26 crc kubenswrapper[4775]: E0123 14:05:26.675872 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:26Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.680852 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.680899 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.680913 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.680933 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.680945 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:26Z","lastTransitionTime":"2026-01-23T14:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:26 crc kubenswrapper[4775]: E0123 14:05:26.695740 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:26Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:26 crc kubenswrapper[4775]: E0123 14:05:26.695903 4775 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.697917 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.697946 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.697960 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.697975 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.697984 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:26Z","lastTransitionTime":"2026-01-23T14:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.713369 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:26 crc kubenswrapper[4775]: E0123 14:05:26.713478 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.714706 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 07:06:28.198676518 +0000 UTC Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.800688 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.800754 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.800771 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.800788 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.800822 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:26Z","lastTransitionTime":"2026-01-23T14:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.903513 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.903562 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.903574 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.903593 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.903607 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:26Z","lastTransitionTime":"2026-01-23T14:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.006849 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.006935 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.006955 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.006987 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.007014 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:27Z","lastTransitionTime":"2026-01-23T14:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.110355 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.110402 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.110418 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.110437 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.110453 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:27Z","lastTransitionTime":"2026-01-23T14:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.212944 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.213016 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.213035 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.213063 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.213080 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:27Z","lastTransitionTime":"2026-01-23T14:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.315650 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.315734 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.315754 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.315786 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.315833 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:27Z","lastTransitionTime":"2026-01-23T14:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.418422 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.418481 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.418495 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.418519 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.418533 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:27Z","lastTransitionTime":"2026-01-23T14:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.521464 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.521506 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.521527 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.521545 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.521559 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:27Z","lastTransitionTime":"2026-01-23T14:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.624698 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.624745 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.624758 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.624777 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.624790 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:27Z","lastTransitionTime":"2026-01-23T14:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.713851 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.713976 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.713881 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:27 crc kubenswrapper[4775]: E0123 14:05:27.714070 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:27 crc kubenswrapper[4775]: E0123 14:05:27.714145 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:27 crc kubenswrapper[4775]: E0123 14:05:27.714307 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.715721 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 00:18:52.693503781 +0000 UTC Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.727310 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.727338 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.727347 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.727367 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.727384 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:27Z","lastTransitionTime":"2026-01-23T14:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.829931 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.830000 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.830024 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.830056 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.830080 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:27Z","lastTransitionTime":"2026-01-23T14:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.934696 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.934751 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.934770 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.934845 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.934866 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:27Z","lastTransitionTime":"2026-01-23T14:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.037756 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.037857 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.037871 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.037897 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.038333 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:28Z","lastTransitionTime":"2026-01-23T14:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.141702 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.141757 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.141768 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.141791 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.141833 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:28Z","lastTransitionTime":"2026-01-23T14:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.245178 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.245235 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.245244 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.245257 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.245266 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:28Z","lastTransitionTime":"2026-01-23T14:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.348237 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.348309 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.348324 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.348379 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.348397 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:28Z","lastTransitionTime":"2026-01-23T14:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.450851 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.450916 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.450935 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.450953 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.450966 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:28Z","lastTransitionTime":"2026-01-23T14:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.553494 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.553571 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.553589 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.553607 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.553620 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:28Z","lastTransitionTime":"2026-01-23T14:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.657014 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.657088 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.657108 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.657135 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.657181 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:28Z","lastTransitionTime":"2026-01-23T14:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.713878 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:28 crc kubenswrapper[4775]: E0123 14:05:28.714113 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.715934 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 22:48:48.525311123 +0000 UTC Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.760508 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.760569 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.760588 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.760613 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.760630 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:28Z","lastTransitionTime":"2026-01-23T14:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.864065 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.864186 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.864217 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.864310 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.864338 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:28Z","lastTransitionTime":"2026-01-23T14:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.713337 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.713390 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.713453 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 14:05:29 crc kubenswrapper[4775]: E0123 14:05:29.713545 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 14:05:29 crc kubenswrapper[4775]: E0123 14:05:29.713729 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 14:05:29 crc kubenswrapper[4775]: E0123 14:05:29.714104 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.716577 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 04:57:49.640538967 +0000 UTC
Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.713908 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2"
Jan 23 14:05:30 crc kubenswrapper[4775]: E0123 14:05:30.714243 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f"
Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.716755 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 20:49:11.370837575 +0000 UTC
Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.713149 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.713161 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.713365 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 14:05:31 crc kubenswrapper[4775]: E0123 14:05:31.713505 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 14:05:31 crc kubenswrapper[4775]: E0123 14:05:31.713995 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 14:05:31 crc kubenswrapper[4775]: E0123 14:05:31.714373 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.714638 4775 scope.go:117] "RemoveContainer" containerID="cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba"
Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.717072 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 23:32:48.458715443 +0000 UTC
Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.729277 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.220726 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovnkube-controller/2.log"
Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.225599 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerStarted","Data":"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157"}
Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.226291 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8"
Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.241110 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04f5b2ad-c277-4ce9-8a8e-1ae658a6820c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3301338273f633b6c32caed6b35db93841743e57f219115ae7c32e16fe4683f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41b0811b85f5245c0352225af50738ebaa72c1e52a2940ee42f5bc99218313ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3e880aa503bbce5a53073f7f735d1defcde092982f39958cd58020b2139b7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.251532 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78fc63a1-5cdd-4e02-ab5b-bf248837f07f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b281d05f695b9f070f8a73110e3b4ea722b237b9df9a31a80b787bd7ea51fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf0bf3bc741e6d2b5e451b53aec1f510f437f076819f0539f51621db401cb64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf0bf3bc741e6d2b5e451b53aec1f510f437f076819f0539f51621db401cb64f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.269158 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.269233 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.269246 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.269266 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.269307 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:32Z","lastTransitionTime":"2026-01-23T14:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.270200 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.281720 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.291468 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.306010 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f14be984531a60487db2daba36d9cba7f2bbafa8b8d68889c261f3b2260f058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:23Z\\\",\\\"message\\\":\\\"2026-01-23T14:04:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_deec2807-d78d-4cb4-94e7-8d84a64fcbe4\\\\n2026-01-23T14:04:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_deec2807-d78d-4cb4-94e7-8d84a64fcbe4 to /host/opt/cni/bin/\\\\n2026-01-23T14:04:38Z [verbose] multus-daemon started\\\\n2026-01-23T14:04:38Z [verbose] Readiness Indicator file check\\\\n2026-01-23T14:05:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.321717 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.333932 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.347335 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.361706 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.373041 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.373111 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.373126 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.373167 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.373185 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:32Z","lastTransitionTime":"2026-01-23T14:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.373407 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e86d8bd8f77572c3ed3ba515863b0d66b2654865e89c4b05bf47072c458b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f43da97bc3001c1066778d14029bd40271ef42849a6966caaf39da7174890aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.385466 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.407868 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.422718 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.440426 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.454903 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.473717 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.475913 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.475953 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.475966 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.476014 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.476029 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:32Z","lastTransitionTime":"2026-01-23T14:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.487350 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.514442 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:03Z\\\",\\\"message\\\":\\\"er_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0123 14:05:03.714446 6455 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:05:03.714448 6455 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 14:05:03.714393 6455 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:05:03.714525 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:05:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:05:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.578993 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.579046 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.579059 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.579078 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.579093 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:32Z","lastTransitionTime":"2026-01-23T14:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.682080 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.682149 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.682173 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.682198 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.682214 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:32Z","lastTransitionTime":"2026-01-23T14:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.713082 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2"
Jan 23 14:05:32 crc kubenswrapper[4775]: E0123 14:05:32.713274 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f"
Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.718128 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 23:16:16.511935552 +0000 UTC
Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.785233 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.785278 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.785287 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.785307 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.785316 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:32Z","lastTransitionTime":"2026-01-23T14:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.888300 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.888343 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.888354 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.888373 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.888390 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:32Z","lastTransitionTime":"2026-01-23T14:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.116644 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.116685 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.116695 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.116708 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.116717 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:33Z","lastTransitionTime":"2026-01-23T14:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.219689 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.219744 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.219760 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.219785 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.219825 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:33Z","lastTransitionTime":"2026-01-23T14:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.323513 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.323574 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.323592 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.323618 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.323636 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:33Z","lastTransitionTime":"2026-01-23T14:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.425926 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.425966 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.425978 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.425998 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.426009 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:33Z","lastTransitionTime":"2026-01-23T14:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.528060 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.528117 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.528135 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.528158 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.528175 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:33Z","lastTransitionTime":"2026-01-23T14:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.630729 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.630767 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.630775 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.630790 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.630815 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:33Z","lastTransitionTime":"2026-01-23T14:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.713889 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.713957 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 14:05:33 crc kubenswrapper[4775]: E0123 14:05:33.714101 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.714137 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:33 crc kubenswrapper[4775]: E0123 14:05:33.714291 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:33 crc kubenswrapper[4775]: E0123 14:05:33.714438 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.718916 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 02:19:07.165566257 +0000 UTC Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.730421 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\
\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.734054 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.734083 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.734090 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.734102 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.734112 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:33Z","lastTransitionTime":"2026-01-23T14:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.743670 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.763256 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.773618 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.792581 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://705e5e63073fc9c3e2efda6b3c6fff7004f1d67a
5cab5204d3670039ea832157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:03Z\\\",\\\"message\\\":\\\"er_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0123 14:05:03.714446 6455 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:05:03.714448 6455 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 14:05:03.714393 6455 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:05:03.714525 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:05:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:05:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.806338 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.821604 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.836181 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.836224 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.836237 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.836257 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.836269 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:33Z","lastTransitionTime":"2026-01-23T14:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.837193 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.850527 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.864584 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.884052 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f14be984531a60487db2daba36d9cba7f2bbafa8b8d68889c261f3b2260f058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:23Z\\\",\\\"message\\\":\\\"2026-01-23T14:04:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_deec2807-d78d-4cb4-94e7-8d84a64fcbe4\\\\n2026-01-23T14:04:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_deec2807-d78d-4cb4-94e7-8d84a64fcbe4 to /host/opt/cni/bin/\\\\n2026-01-23T14:04:38Z [verbose] multus-daemon started\\\\n2026-01-23T14:04:38Z [verbose] Readiness Indicator file check\\\\n2026-01-23T14:05:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.903147 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.918740 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.934119 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04f5b2ad-c277-4ce9-8a8e-1ae658a6820c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3301338273f633b6c32caed6b35db93841743e57f219115ae7c32e16fe4683f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41b0811b85f5245c0352225af50738ebaa72c1e52a2940ee42f5bc99218313ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3e880aa503bbce5a53073f7f735d1defcde092982f39958cd58020b2139b7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.938384 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.938446 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.938461 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.938482 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.938499 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:33Z","lastTransitionTime":"2026-01-23T14:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.946240 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78fc63a1-5cdd-4e02-ab5b-bf248837f07f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b281d05f695b9f070f8a73110e3b4ea722b237b9df9a31a80b787bd7ea51fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf0bf3bc741e6d2b5e451b53aec1f510f437f076819f0539f51621db401cb64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf0bf3bc741e6d2b5e451b53aec1f510f437f076819f0539f51621db401cb64f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.958751 4775 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.974251 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.988197 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.003081 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e86d8bd8f77572c3ed3ba515863b0d66b2654865e89c4b05bf47072c458b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f43da97bc3001c1066778d14029bd40271ef42849a6966caaf39da7174890aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.041357 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.041429 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.041454 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.041484 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.041510 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:34Z","lastTransitionTime":"2026-01-23T14:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.143945 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.144009 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.144025 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.144045 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.144057 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:34Z","lastTransitionTime":"2026-01-23T14:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.235487 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovnkube-controller/3.log" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.236306 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovnkube-controller/2.log" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.239680 4775 generic.go:334] "Generic (PLEG): container finished" podID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerID="705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157" exitCode=1 Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.239734 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerDied","Data":"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157"} Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.239781 4775 scope.go:117] "RemoveContainer" containerID="cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.240834 4775 scope.go:117] "RemoveContainer" containerID="705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157" Jan 23 14:05:34 crc kubenswrapper[4775]: E0123 14:05:34.241064 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.247363 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.247404 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.247420 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.247442 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.247460 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:34Z","lastTransitionTime":"2026-01-23T14:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.264549 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.288201 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.307949 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.346610 4775 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:03Z\\\",\\\"message\\\":\\\"er_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0123 14:05:03.714446 6455 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:05:03.714448 6455 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 14:05:03.714393 6455 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:05:03.714525 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:05:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:33Z\\\",\\\"message\\\":\\\".go:311] 
Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 14:05:32.967045 6836 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 14:05:32.967156 6836 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 14:05:32.967220 6836 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 14:05:32.967277 6836 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 14:05:32.967725 6836 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 14:05:32.967779 6836 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 14:05:32.967787 6836 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 14:05:32.967869 6836 factory.go:656] Stopping watch factory\\\\nI0123 14:05:32.967869 6836 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 14:05:32.967893 6836 ovnkube.go:599] Stopped ovnkube\\\\nI0123 14:05:32.967883 6836 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:05:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a
360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.352765 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.352852 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.352871 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.352895 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.352913 4775 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:34Z","lastTransitionTime":"2026-01-23T14:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.366777 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04f5b2ad-c277-4ce9-8a8e-1ae658a6820c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3301338273f633b6c32caed6b35db93841743e57f219115ae7c32e16fe4683f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41b0811b85f5245c0352225af50738ebaa72c1e52a2940ee42f5bc99218313ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3e880aa503bbce5a53073f7f735d1defcde092982f39958cd58020b2139b7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.383955 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78fc63a1-5cdd-4e02-ab5b-bf248837f07f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b281d05f695b9f070f8a73110e3b4ea722b237b9df9a31a80b787bd7ea51fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf0bf3bc741e6d2b5e451b53aec1f510f437f076819f0539f51621db401cb64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf0bf3bc741e6d2b5e451b53aec1f510f437f076819f0539f51621db401cb64f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.401297 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.420605 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.437962 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.456276 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.456331 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.456917 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.456965 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.456988 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:34Z","lastTransitionTime":"2026-01-23T14:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.459203 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f14be984531a60487db2daba36d9cba7f2bbafa8b8d68889c261f3b2260f058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:23Z\\\",\\\"message\\\":\\\"2026-01-23T14:04:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_deec2807-d78d-4cb4-94e7-8d84a64fcbe4\\\\n2026-01-23T14:04:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_deec2807-d78d-4cb4-94e7-8d84a64fcbe4 to /host/opt/cni/bin/\\\\n2026-01-23T14:04:38Z [verbose] multus-daemon started\\\\n2026-01-23T14:04:38Z [verbose] Readiness Indicator file check\\\\n2026-01-23T14:05:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.477963 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.489171 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.503503 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.516159 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.527233 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e86d8bd8f77572c3ed3ba515863b0d66b2654865e89c4b05bf47072c458b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f43da97bc3001c1066778d14029bd40271ef42849a6966caaf39da7174890aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.539543 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.559208 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.559615 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.559648 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.559660 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.559678 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.559689 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:34Z","lastTransitionTime":"2026-01-23T14:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.572229 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.584100 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.662344 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.662379 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.662391 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.662408 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.662420 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:34Z","lastTransitionTime":"2026-01-23T14:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.713193 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:34 crc kubenswrapper[4775]: E0123 14:05:34.713349 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.719264 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 08:19:24.488313815 +0000 UTC Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.765658 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.765730 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.765749 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.765779 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.765826 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:34Z","lastTransitionTime":"2026-01-23T14:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.871317 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.871386 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.871432 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.871475 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.871488 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:34Z","lastTransitionTime":"2026-01-23T14:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.974553 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.974638 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.974662 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.975217 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.975464 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:34Z","lastTransitionTime":"2026-01-23T14:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.079706 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.079772 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.079792 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.079846 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.079868 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:35Z","lastTransitionTime":"2026-01-23T14:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.182910 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.182997 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.183016 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.183045 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.183065 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:35Z","lastTransitionTime":"2026-01-23T14:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.245332 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovnkube-controller/3.log"
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.285651 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.285712 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.285731 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.285755 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.285773 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:35Z","lastTransitionTime":"2026-01-23T14:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.388276 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.388331 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.388349 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.388373 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.388390 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:35Z","lastTransitionTime":"2026-01-23T14:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.491831 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.491903 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.491923 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.491946 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.491963 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:35Z","lastTransitionTime":"2026-01-23T14:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.594955 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.595010 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.595022 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.595038 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.595050 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:35Z","lastTransitionTime":"2026-01-23T14:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.663194 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.663387 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 14:05:35 crc kubenswrapper[4775]: E0123 14:05:35.663474 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:39.66343762 +0000 UTC m=+146.658266360 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.663543 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 14:05:35 crc kubenswrapper[4775]: E0123 14:05:35.663617 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 23 14:05:35 crc kubenswrapper[4775]: E0123 14:05:35.663667 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 23 14:05:35 crc kubenswrapper[4775]: E0123 14:05:35.663689 4775 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 14:05:35 crc kubenswrapper[4775]: E0123 14:05:35.663709 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 23 14:05:35 crc kubenswrapper[4775]: E0123 14:05:35.663740 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 23 14:05:35 crc kubenswrapper[4775]: E0123 14:05:35.663761 4775 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 14:05:35 crc kubenswrapper[4775]: E0123 14:05:35.663771 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 14:06:39.663743968 +0000 UTC m=+146.658572748 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 14:05:35 crc kubenswrapper[4775]: E0123 14:05:35.663787 4775 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.663638 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:05:35 crc kubenswrapper[4775]: E0123 14:05:35.663867 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 14:06:39.663841921 +0000 UTC m=+146.658670691 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 14:05:35 crc kubenswrapper[4775]: E0123 14:05:35.663899 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 14:06:39.663886142 +0000 UTC m=+146.658714922 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.663934 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:05:35 crc kubenswrapper[4775]: E0123 14:05:35.664030 4775 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 23 14:05:35 crc kubenswrapper[4775]: E0123 14:05:35.664063 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 14:06:39.664054757 +0000 UTC m=+146.658883597 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.698232 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.698296 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.698312 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.698337 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.698353 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:35Z","lastTransitionTime":"2026-01-23T14:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.713196 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:05:35 crc kubenswrapper[4775]: E0123 14:05:35.713332 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.713532 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:35 crc kubenswrapper[4775]: E0123 14:05:35.713596 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.713717 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:35 crc kubenswrapper[4775]: E0123 14:05:35.713775 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.719638 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 19:15:11.893166892 +0000 UTC Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.802109 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.802216 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.802250 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.802279 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.802298 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:35Z","lastTransitionTime":"2026-01-23T14:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.906797 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.906869 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.906882 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.906901 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.906915 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:35Z","lastTransitionTime":"2026-01-23T14:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.010784 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.010894 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.010912 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.010937 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.010956 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:36Z","lastTransitionTime":"2026-01-23T14:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.114382 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.114419 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.114435 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.114453 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.114463 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:36Z","lastTransitionTime":"2026-01-23T14:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.262462 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.262526 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.262539 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.262558 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.262573 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:36Z","lastTransitionTime":"2026-01-23T14:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.365601 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.365632 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.365643 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.365659 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.365671 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:36Z","lastTransitionTime":"2026-01-23T14:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.468345 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.468404 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.468425 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.468448 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.468469 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:36Z","lastTransitionTime":"2026-01-23T14:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.571290 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.571324 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.571335 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.571351 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.571360 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:36Z","lastTransitionTime":"2026-01-23T14:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.673847 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.673892 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.673903 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.673919 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.673931 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:36Z","lastTransitionTime":"2026-01-23T14:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.713686 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2"
Jan 23 14:05:36 crc kubenswrapper[4775]: E0123 14:05:36.713883 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f"
pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.720738 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 00:03:41.16099696 +0000 UTC Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.776151 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.776202 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.776223 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.776244 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.776258 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:36Z","lastTransitionTime":"2026-01-23T14:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.879059 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.879144 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.879167 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.879197 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.879219 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:36Z","lastTransitionTime":"2026-01-23T14:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.983235 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.983316 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.983344 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.983381 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.983406 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:36Z","lastTransitionTime":"2026-01-23T14:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.056549 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.056618 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.056635 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.056665 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.056685 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:37Z","lastTransitionTime":"2026-01-23T14:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 23 14:05:37 crc kubenswrapper[4775]: E0123 14:05:37.081381 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.087399 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.087467 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.087489 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.087534 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.087554 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:37Z","lastTransitionTime":"2026-01-23T14:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:37 crc kubenswrapper[4775]: E0123 14:05:37.105971 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.112257 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.112331 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.112355 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.112390 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.112415 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:37Z","lastTransitionTime":"2026-01-23T14:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:37 crc kubenswrapper[4775]: E0123 14:05:37.132814 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.138710 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.138818 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.138838 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.138863 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.138884 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:37Z","lastTransitionTime":"2026-01-23T14:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:37 crc kubenswrapper[4775]: E0123 14:05:37.157047 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.163543 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.163600 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.163612 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.163631 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.163646 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:37Z","lastTransitionTime":"2026-01-23T14:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:37 crc kubenswrapper[4775]: E0123 14:05:37.183850 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:37 crc kubenswrapper[4775]: E0123 14:05:37.184033 4775 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.186287 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
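The x509 failures above are a validity-window problem rather than a connectivity one: the node's clock (2026-01-23T14:05:37Z) is past the webhook serving certificate's notAfter date (2025-08-24T17:21:41Z), so every patch attempt is rejected during the TLS handshake before it reaches the webhook. Below is a minimal Go sketch for confirming the certificate's validity window independently of the kubelet, assuming the webhook still listens on 127.0.0.1:9743 as in the entries above; the program is illustrative and not part of this log.

// Minimal diagnostic sketch: dial the webhook endpoint named in the kubelet
// errors and print the serving certificate's validity window. The address is
// taken from the log; everything else here is an assumption for illustration.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Verification would fail (the certificate is expired), so skip it purely
	// to read the certificate; nothing is trusted or sent over this connection.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial webhook endpoint: %v", err)
	}
	defer conn.Close()

	// The leaf (serving) certificate is first in the peer chain.
	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	fmt.Printf("expired:   %v\n", time.Now().After(cert.NotAfter))
}

If notAfter is indeed in the past, rotating the webhook's serving certificate (or correcting the node clock, if it has drifted forward) should allow the node-status patch to succeed again.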
event="NodeHasSufficientMemory" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.186368 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.186387 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.186418 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.186440 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:37Z","lastTransitionTime":"2026-01-23T14:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.289912 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.290021 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.290053 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.290101 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.290123 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:37Z","lastTransitionTime":"2026-01-23T14:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.393305 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.393407 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.393426 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.393446 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.393461 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:37Z","lastTransitionTime":"2026-01-23T14:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.496721 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.496774 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.496784 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.496824 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.496849 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:37Z","lastTransitionTime":"2026-01-23T14:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.600761 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.600833 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.600844 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.600871 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.600884 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:37Z","lastTransitionTime":"2026-01-23T14:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.703678 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.703748 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.703764 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.703787 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.703850 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:37Z","lastTransitionTime":"2026-01-23T14:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.713918 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.713996 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.713914 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:05:37 crc kubenswrapper[4775]: E0123 14:05:37.714109 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 14:05:37 crc kubenswrapper[4775]: E0123 14:05:37.714241 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 14:05:37 crc kubenswrapper[4775]: E0123 14:05:37.714455 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.722772 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 08:37:05.141246284 +0000 UTC
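Every NotReady heartbeat and every "Error syncing pod" entry in this window traces back to a single condition: no CNI network configuration exists under /etc/kubernetes/cni/net.d/, so the network plugin is not ready and no pod sandbox can be created. A rough Go sketch of the directory probe the message implies; this is illustrative only, not the actual CRI-O/ocicni loader:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether any CNI network configuration file
// (.conf, .conflist, .json) exists in dir. It mirrors the condition
// behind "no CNI configuration file in /etc/kubernetes/cni/net.d/",
// but is a sketch, not the runtime's real implementation.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	if !ok {
		fmt.Println("network plugin not ready: no CNI configuration file found; has your network provider started?")
		return
	}
	fmt.Println("CNI configuration present")
}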
Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.805772 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.805827 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.805837 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.805851 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.805861 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:37Z","lastTransitionTime":"2026-01-23T14:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.908667 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.908714 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.908722 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.908736 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.908745 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:37Z","lastTransitionTime":"2026-01-23T14:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.012111 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.012229 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.012242 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.012268 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.012285 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:38Z","lastTransitionTime":"2026-01-23T14:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
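The condition={...} payload repeated by setters.go:603 is the node's Ready condition exactly as it will be patched onto the Node object. For anyone scripting against this capture, a Go sketch that decodes one of these payloads; the struct is a trimmed stand-in for the Kubernetes NodeCondition type, not the real k8s.io/api definition:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// nodeCondition is a trimmed stand-in for the Kubernetes NodeCondition
// schema, just enough to decode the condition={...} payloads in this log.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// One payload copied verbatim from the log above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:38Z","lastTransitionTime":"2026-01-23T14:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("node %s=%s since %s (%s)\n", c.Type, c.Status, c.LastTransitionTime, c.Reason)
}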
Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.115262 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.115341 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.115357 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.115383 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.115397 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:38Z","lastTransitionTime":"2026-01-23T14:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.220083 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.220159 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.220187 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.220225 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.220252 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:38Z","lastTransitionTime":"2026-01-23T14:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.323841 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.324575 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.324591 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.324614 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.324629 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:38Z","lastTransitionTime":"2026-01-23T14:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.428325 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.428390 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.428404 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.428430 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.428445 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:38Z","lastTransitionTime":"2026-01-23T14:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.531393 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.531925 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.532081 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.532307 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.532462 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:38Z","lastTransitionTime":"2026-01-23T14:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.636042 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.636104 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.636123 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.636149 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.636167 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:38Z","lastTransitionTime":"2026-01-23T14:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.713468 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:38 crc kubenswrapper[4775]: E0123 14:05:38.713686 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.723637 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 06:24:36.791156414 +0000 UTC Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.739102 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.739148 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.739159 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.739178 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.739193 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:38Z","lastTransitionTime":"2026-01-23T14:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.843567 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.843630 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.843647 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.843673 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.843690 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:38Z","lastTransitionTime":"2026-01-23T14:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.946369 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.946419 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.946430 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.946447 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.946462 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:38Z","lastTransitionTime":"2026-01-23T14:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.050115 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.050171 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.050182 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.050203 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.050214 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:39Z","lastTransitionTime":"2026-01-23T14:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.154046 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.154092 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.154102 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.154120 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.154133 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:39Z","lastTransitionTime":"2026-01-23T14:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.275987 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.276062 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.276083 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.276111 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.276134 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:39Z","lastTransitionTime":"2026-01-23T14:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.378478 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.378522 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.378533 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.378557 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.378577 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:39Z","lastTransitionTime":"2026-01-23T14:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.481515 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.481572 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.481589 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.481611 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.481631 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:39Z","lastTransitionTime":"2026-01-23T14:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.584794 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.584910 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.584930 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.584953 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.584972 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:39Z","lastTransitionTime":"2026-01-23T14:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.688889 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.688969 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.688991 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.689020 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.689048 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:39Z","lastTransitionTime":"2026-01-23T14:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.713028 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.713128 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.713250 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:39 crc kubenswrapper[4775]: E0123 14:05:39.713495 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:39 crc kubenswrapper[4775]: E0123 14:05:39.713662 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:39 crc kubenswrapper[4775]: E0123 14:05:39.713837 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.724753 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 15:27:13.173459325 +0000 UTC Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.792762 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.792868 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.792886 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.792913 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.792938 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:39Z","lastTransitionTime":"2026-01-23T14:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.792762 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.792868 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.792886 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.792913 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.792938 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:39Z","lastTransitionTime":"2026-01-23T14:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.896340 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.896401 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.896421 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.896446 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.896464 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:39Z","lastTransitionTime":"2026-01-23T14:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.000483 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.000559 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.000604 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.000635 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.000652 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:40Z","lastTransitionTime":"2026-01-23T14:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.104197 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.104286 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.104306 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.104337 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.104358 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:40Z","lastTransitionTime":"2026-01-23T14:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.207920 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.207997 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.208015 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.208039 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.208059 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:40Z","lastTransitionTime":"2026-01-23T14:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.312129 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.312235 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.312257 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.312285 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.312304 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:40Z","lastTransitionTime":"2026-01-23T14:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.423180 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.424121 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.424143 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.424160 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.424170 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:40Z","lastTransitionTime":"2026-01-23T14:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.527479 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.527517 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.527529 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.527546 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.527559 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:40Z","lastTransitionTime":"2026-01-23T14:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.630050 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.630171 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.630201 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.630233 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.630330 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:40Z","lastTransitionTime":"2026-01-23T14:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.713508 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:40 crc kubenswrapper[4775]: E0123 14:05:40.713754 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.725675 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 08:07:06.341287461 +0000 UTC Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.732962 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.733000 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.733012 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.733027 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.733037 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:40Z","lastTransitionTime":"2026-01-23T14:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.836783 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.836949 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.836969 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.836993 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.837012 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:40Z","lastTransitionTime":"2026-01-23T14:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.836783 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.836949 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.836969 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.836993 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.837012 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:40Z","lastTransitionTime":"2026-01-23T14:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.940221 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.940267 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.940282 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.940304 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.940321 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:40Z","lastTransitionTime":"2026-01-23T14:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.043760 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.043878 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.043909 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.043939 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.043958 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:41Z","lastTransitionTime":"2026-01-23T14:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.146377 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.146469 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.146494 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.146525 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.146552 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:41Z","lastTransitionTime":"2026-01-23T14:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.249485 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.249549 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.249573 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.249602 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.249624 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:41Z","lastTransitionTime":"2026-01-23T14:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.352170 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.352229 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.352254 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.352275 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.352291 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:41Z","lastTransitionTime":"2026-01-23T14:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.456259 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.456346 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.456366 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.456392 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.456450 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:41Z","lastTransitionTime":"2026-01-23T14:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.559255 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.559396 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.559432 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.559459 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.559479 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:41Z","lastTransitionTime":"2026-01-23T14:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.662758 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.662863 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.662884 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.662907 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.662926 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:41Z","lastTransitionTime":"2026-01-23T14:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.713619 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.713643 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.713902 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:05:41 crc kubenswrapper[4775]: E0123 14:05:41.714038 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 14:05:41 crc kubenswrapper[4775]: E0123 14:05:41.714346 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 14:05:41 crc kubenswrapper[4775]: E0123 14:05:41.714429 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.726271 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 00:52:56.129420744 +0000 UTC
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.766199 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.766255 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.766271 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.766366 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.766390 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:41Z","lastTransitionTime":"2026-01-23T14:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.869974 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.870038 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.870047 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.870076 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.870086 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:41Z","lastTransitionTime":"2026-01-23T14:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.973579 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.973618 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.973629 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.973643 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.973653 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:41Z","lastTransitionTime":"2026-01-23T14:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.077297 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.077362 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.077378 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.077402 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.077422 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:42Z","lastTransitionTime":"2026-01-23T14:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.180219 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.180286 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.180309 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.180338 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.180359 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:42Z","lastTransitionTime":"2026-01-23T14:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.283252 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.283341 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.283365 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.283399 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.283423 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:42Z","lastTransitionTime":"2026-01-23T14:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.387405 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.387518 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.387542 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.387567 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.387590 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:42Z","lastTransitionTime":"2026-01-23T14:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.490517 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.490585 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.490608 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.490639 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.490659 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:42Z","lastTransitionTime":"2026-01-23T14:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.594045 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.594116 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.594137 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.594197 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.594218 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:42Z","lastTransitionTime":"2026-01-23T14:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.696478 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.696508 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.696516 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.696533 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.696545 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:42Z","lastTransitionTime":"2026-01-23T14:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.713705 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2"
Jan 23 14:05:42 crc kubenswrapper[4775]: E0123 14:05:42.713854 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.726418 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 08:05:53.099374614 +0000 UTC
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.799071 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.799098 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.799107 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.799119 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.799128 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:42Z","lastTransitionTime":"2026-01-23T14:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.902093 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.902182 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.902202 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.902223 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.902238 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:42Z","lastTransitionTime":"2026-01-23T14:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.005316 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.005418 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.005441 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.005471 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.005494 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:43Z","lastTransitionTime":"2026-01-23T14:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.108969 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.109033 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.109051 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.109076 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.109094 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:43Z","lastTransitionTime":"2026-01-23T14:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.212267 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.212327 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.212344 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.212367 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.212385 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:43Z","lastTransitionTime":"2026-01-23T14:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.315682 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.315728 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.315748 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.315763 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.315775 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:43Z","lastTransitionTime":"2026-01-23T14:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.418857 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.418920 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.418938 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.418963 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.418981 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:43Z","lastTransitionTime":"2026-01-23T14:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.522337 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.522411 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.522435 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.522466 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.522487 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:43Z","lastTransitionTime":"2026-01-23T14:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.626259 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.626327 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.626349 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.626375 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.626391 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:43Z","lastTransitionTime":"2026-01-23T14:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.715089 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.715172 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 14:05:43 crc kubenswrapper[4775]: E0123 14:05:43.715363 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 14:05:43 crc kubenswrapper[4775]: E0123 14:05:43.715476 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.715641 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 14:05:43 crc kubenswrapper[4775]: E0123 14:05:43.715717 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.727397 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 01:17:13.995998893 +0000 UTC
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.729384 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.729533 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.729556 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.729583 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.729599 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:43Z","lastTransitionTime":"2026-01-23T14:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.740775 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.758850 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.776940 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.798838 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:03Z\\\",\\\"message\\\":\\\"er_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0123 14:05:03.714446 6455 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:05:03.714448 6455 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 14:05:03.714393 6455 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:05:03.714525 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:05:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:33Z\\\",\\\"message\\\":\\\".go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 14:05:32.967045 6836 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 14:05:32.967156 6836 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 14:05:32.967220 6836 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 14:05:32.967277 6836 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 14:05:32.967725 6836 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 14:05:32.967779 6836 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 14:05:32.967787 6836 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 14:05:32.967869 6836 factory.go:656] Stopping watch factory\\\\nI0123 14:05:32.967869 6836 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 14:05:32.967893 6836 ovnkube.go:599] Stopped ovnkube\\\\nI0123 14:05:32.967883 6836 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:05:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.816053 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.832503 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.832558 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:43 crc 
kubenswrapper[4775]: I0123 14:05:43.832568 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.832612 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.832624 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:43Z","lastTransitionTime":"2026-01-23T14:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.832893 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.850615 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04f5b2ad-c277-4ce9-8a8e-1ae658a6820c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3301338273f633b6c32caed6b35db93841743e57f219115ae7c32e16fe4683f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41b0811b85f5245c0352225af50738ebaa72c1e52a2940ee42f5bc99218313ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3e880aa503bbce5a53073f7f735d1defcde092982f39958cd58020b2139b7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.865525 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78fc63a1-5cdd-4e02-ab5b-bf248837f07f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b281d05f695b9f070f8a73110e3b4ea722b237b9df9a31a80b787bd7ea51fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf0bf3bc741e6d2b5e451b53aec1f510f437f076819f0539f51621db401cb64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf0bf3bc741e6d2b5e451b53aec1f510f437f076819f0539f51621db401cb64f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.878196 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T
14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.889328 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.898949 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.910494 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f14be984531a60487db2daba36d9cba7f2bbafa8b8d68889c261f3b2260f058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:23Z\\\",\\\"message\\\":\\\"2026-01-23T14:04:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_deec2807-d78d-4cb4-94e7-8d84a64fcbe4\\\\n2026-01-23T14:04:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_deec2807-d78d-4cb4-94e7-8d84a64fcbe4 to /host/opt/cni/bin/\\\\n2026-01-23T14:04:38Z [verbose] multus-daemon started\\\\n2026-01-23T14:04:38Z [verbose] Readiness Indicator file check\\\\n2026-01-23T14:05:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.923099 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.934384 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.935388 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.935450 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.935463 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.935482 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.935492 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:43Z","lastTransitionTime":"2026-01-23T14:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.947681 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e86d8bd8f77572c3ed3ba515863b0d66b2654865e89c4b05bf47072c458b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f43da97bc3001c1066778d14029bd40271ef42849a6966caaf39da7174890aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.959463 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.976472 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.988083 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z"
Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.997936 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.037852 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.037895 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.037912 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.037933 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.037946 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:44Z","lastTransitionTime":"2026-01-23T14:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.141450 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.141537 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.141570 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.141605 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.141628 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:44Z","lastTransitionTime":"2026-01-23T14:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.245217 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.245262 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.245271 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.245288 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.245298 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:44Z","lastTransitionTime":"2026-01-23T14:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.348094 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.348145 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.348161 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.348185 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.348204 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:44Z","lastTransitionTime":"2026-01-23T14:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.451839 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.451883 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.451893 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.451908 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.451918 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:44Z","lastTransitionTime":"2026-01-23T14:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.554903 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.554946 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.554956 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.554970 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.554979 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:44Z","lastTransitionTime":"2026-01-23T14:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.657883 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.657919 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.657931 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.657948 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.657961 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:44Z","lastTransitionTime":"2026-01-23T14:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.712958 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2"
Jan 23 14:05:44 crc kubenswrapper[4775]: E0123 14:05:44.713147 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.727670 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 08:51:43.217096209 +0000 UTC
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.760993 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.761034 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.761049 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.761071 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.761085 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:44Z","lastTransitionTime":"2026-01-23T14:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.864073 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.864162 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.864186 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.864216 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.864236 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:44Z","lastTransitionTime":"2026-01-23T14:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.967327 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.967369 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.967380 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.967394 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.967403 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:44Z","lastTransitionTime":"2026-01-23T14:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.071444 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.071487 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.071495 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.071510 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.071520 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:45Z","lastTransitionTime":"2026-01-23T14:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.175113 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.175149 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.175159 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.175175 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.175186 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:45Z","lastTransitionTime":"2026-01-23T14:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.277920 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.278009 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.278062 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.278093 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.278123 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:45Z","lastTransitionTime":"2026-01-23T14:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.381423 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.381485 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.381507 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.381534 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.381554 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:45Z","lastTransitionTime":"2026-01-23T14:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.484417 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.484492 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.484509 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.484533 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.484554 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:45Z","lastTransitionTime":"2026-01-23T14:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.587692 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.587754 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.587772 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.587798 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.587840 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:45Z","lastTransitionTime":"2026-01-23T14:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.691340 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.691402 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.691426 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.691455 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.691477 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:45Z","lastTransitionTime":"2026-01-23T14:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.713247 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.713311 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 14:05:45 crc kubenswrapper[4775]: E0123 14:05:45.713474 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.713589 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 14:05:45 crc kubenswrapper[4775]: E0123 14:05:45.713758 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 14:05:45 crc kubenswrapper[4775]: E0123 14:05:45.713943 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.728290 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 15:28:47.819455143 +0000 UTC
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.794173 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.794242 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.794262 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.794299 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.794338 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:45Z","lastTransitionTime":"2026-01-23T14:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.897911 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.897985 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.898006 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.898030 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.898048 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:45Z","lastTransitionTime":"2026-01-23T14:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.001131 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.001160 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.001169 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.001198 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.001210 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:46Z","lastTransitionTime":"2026-01-23T14:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.104524 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.104576 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.104594 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.104613 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.104628 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:46Z","lastTransitionTime":"2026-01-23T14:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.208598 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.208668 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.208680 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.208700 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.208718 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:46Z","lastTransitionTime":"2026-01-23T14:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.311413 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.311503 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.311514 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.311532 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.311545 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:46Z","lastTransitionTime":"2026-01-23T14:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.415099 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.415153 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.415170 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.415193 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.415209 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:46Z","lastTransitionTime":"2026-01-23T14:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.517440 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.517482 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.517490 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.517503 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.517512 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:46Z","lastTransitionTime":"2026-01-23T14:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.621204 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.621275 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.621295 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.621327 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.621349 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:46Z","lastTransitionTime":"2026-01-23T14:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.714041 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2"
Jan 23 14:05:46 crc kubenswrapper[4775]: E0123 14:05:46.714551 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f"
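[The NodeNotReady churn above repeats a single complaint: no CNI configuration file in /etc/kubernetes/cni/net.d/. A short sketch of the same check, run on the node itself and using the directory named in the log, that just lists what (if anything) the kubelet's network plugin would find there:

package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	// Same directory the kubelet complains about in the entries above.
	const dir = "/etc/kubernetes/cni/net.d/"
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatalf("cannot read %s: %v", dir, err)
	}
	if len(entries) == 0 {
		fmt.Println("no CNI configuration files; network plugin not ready")
		return
	}
	for _, e := range entries {
		fmt.Println(dir + e.Name())
	}
}

Here the directory is expected to be empty until the network provider (multus/ovn-kubernetes pods visible later in this log) writes its configuration.]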
pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.725891 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.725952 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.725970 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.725998 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.726018 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:46Z","lastTransitionTime":"2026-01-23T14:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.728950 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 17:55:57.242216556 +0000 UTC Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.829209 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.829310 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.829342 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.829382 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.829412 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:46Z","lastTransitionTime":"2026-01-23T14:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.933271 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.933352 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.933372 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.933398 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.933418 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:46Z","lastTransitionTime":"2026-01-23T14:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.037083 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.037148 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.037175 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.037197 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.037215 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:47Z","lastTransitionTime":"2026-01-23T14:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.142230 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.142376 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.142404 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.142437 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.142472 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:47Z","lastTransitionTime":"2026-01-23T14:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.247570 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.247648 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.247668 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.247695 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.247716 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:47Z","lastTransitionTime":"2026-01-23T14:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.268791 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.268867 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.268887 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.268917 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.268930 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:47Z","lastTransitionTime":"2026-01-23T14:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.343960 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr"] Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.344660 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.348659 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.348794 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.348846 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.352338 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.398170 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=76.398141921 podStartE2EDuration="1m16.398141921s" podCreationTimestamp="2026-01-23 14:04:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:05:47.398046358 +0000 UTC m=+94.392875128" watchObservedRunningTime="2026-01-23 14:05:47.398141921 +0000 UTC m=+94.392970701" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.404251 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02c12c8d-0376-46e2-9b11-42ffa6ee2a4d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-b8gsr\" (UID: \"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.404340 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02c12c8d-0376-46e2-9b11-42ffa6ee2a4d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-b8gsr\" (UID: \"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.404370 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/02c12c8d-0376-46e2-9b11-42ffa6ee2a4d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-b8gsr\" (UID: \"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.404561 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/02c12c8d-0376-46e2-9b11-42ffa6ee2a4d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-b8gsr\" (UID: \"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.404636 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/02c12c8d-0376-46e2-9b11-42ffa6ee2a4d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-b8gsr\" 
(UID: \"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.437154 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podStartSLOduration=71.437133474 podStartE2EDuration="1m11.437133474s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:05:47.436876137 +0000 UTC m=+94.431704887" watchObservedRunningTime="2026-01-23 14:05:47.437133474 +0000 UTC m=+94.431962234" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.488734 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-kv8zk" podStartSLOduration=72.488714495 podStartE2EDuration="1m12.488714495s" podCreationTimestamp="2026-01-23 14:04:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:05:47.488207061 +0000 UTC m=+94.483035841" watchObservedRunningTime="2026-01-23 14:05:47.488714495 +0000 UTC m=+94.483543235" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.506152 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02c12c8d-0376-46e2-9b11-42ffa6ee2a4d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-b8gsr\" (UID: \"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.506236 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/02c12c8d-0376-46e2-9b11-42ffa6ee2a4d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-b8gsr\" (UID: \"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.506308 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/02c12c8d-0376-46e2-9b11-42ffa6ee2a4d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-b8gsr\" (UID: \"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.506346 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/02c12c8d-0376-46e2-9b11-42ffa6ee2a4d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-b8gsr\" (UID: \"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.506383 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/02c12c8d-0376-46e2-9b11-42ffa6ee2a4d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-b8gsr\" (UID: \"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.506403 4775 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02c12c8d-0376-46e2-9b11-42ffa6ee2a4d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-b8gsr\" (UID: \"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.506493 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-hpxpf" podStartSLOduration=71.50646927 podStartE2EDuration="1m11.50646927s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:05:47.50610021 +0000 UTC m=+94.500928990" watchObservedRunningTime="2026-01-23 14:05:47.50646927 +0000 UTC m=+94.501298010" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.506539 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/02c12c8d-0376-46e2-9b11-42ffa6ee2a4d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-b8gsr\" (UID: \"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.507725 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/02c12c8d-0376-46e2-9b11-42ffa6ee2a4d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-b8gsr\" (UID: \"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.516671 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02c12c8d-0376-46e2-9b11-42ffa6ee2a4d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-b8gsr\" (UID: \"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.537141 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02c12c8d-0376-46e2-9b11-42ffa6ee2a4d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-b8gsr\" (UID: \"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.540431 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" podStartSLOduration=71.540404598 podStartE2EDuration="1m11.540404598s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:05:47.524450121 +0000 UTC m=+94.519278861" watchObservedRunningTime="2026-01-23 14:05:47.540404598 +0000 UTC m=+94.535233338" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.575075 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=38.575053686 podStartE2EDuration="38.575053686s" podCreationTimestamp="2026-01-23 14:05:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:05:47.560010763 +0000 UTC m=+94.554839523" watchObservedRunningTime="2026-01-23 14:05:47.575053686 +0000 UTC m=+94.569882426" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.594300 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=16.59426658 podStartE2EDuration="16.59426658s" podCreationTimestamp="2026-01-23 14:05:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:05:47.575849157 +0000 UTC m=+94.570677897" watchObservedRunningTime="2026-01-23 14:05:47.59426658 +0000 UTC m=+94.589095330" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.669459 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" podStartSLOduration=71.669440622 podStartE2EDuration="1m11.669440622s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:05:47.668952149 +0000 UTC m=+94.663780889" watchObservedRunningTime="2026-01-23 14:05:47.669440622 +0000 UTC m=+94.664269362" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.672689 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.712986 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:47 crc kubenswrapper[4775]: E0123 14:05:47.713494 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.713061 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:47 crc kubenswrapper[4775]: E0123 14:05:47.713568 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.713004 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:47 crc kubenswrapper[4775]: E0123 14:05:47.713617 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.729240 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 03:03:24.456861603 +0000 UTC Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.729305 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.753298 4775 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.762452 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=76.762436371 podStartE2EDuration="1m16.762436371s" podCreationTimestamp="2026-01-23 14:04:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:05:47.761747673 +0000 UTC m=+94.756576413" watchObservedRunningTime="2026-01-23 14:05:47.762436371 +0000 UTC m=+94.757265111" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.763090 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=74.763083498 podStartE2EDuration="1m14.763083498s" podCreationTimestamp="2026-01-23 14:04:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:05:47.694121043 +0000 UTC m=+94.688949783" watchObservedRunningTime="2026-01-23 14:05:47.763083498 +0000 UTC m=+94.757912238" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.793829 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-dwmhf" podStartSLOduration=71.79379008 podStartE2EDuration="1m11.79379008s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:05:47.793227165 +0000 UTC m=+94.788055905" watchObservedRunningTime="2026-01-23 14:05:47.79379008 +0000 UTC m=+94.788618820" Jan 23 14:05:48 crc kubenswrapper[4775]: I0123 14:05:48.311786 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" event={"ID":"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d","Type":"ContainerStarted","Data":"80307ac3f605396aace0d3d0c7e0cd41138ac2811501716ee208e46be09c238b"} Jan 23 14:05:48 crc kubenswrapper[4775]: I0123 14:05:48.311890 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" event={"ID":"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d","Type":"ContainerStarted","Data":"2595841f56cbb803df15093f0988218f5395d5fbdfa7c2c80d3fc0ddddf2fd3e"} Jan 23 14:05:48 crc kubenswrapper[4775]: I0123 14:05:48.336958 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" podStartSLOduration=72.336924297 podStartE2EDuration="1m12.336924297s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-23 14:05:48.335774527 +0000 UTC m=+95.330603307" watchObservedRunningTime="2026-01-23 14:05:48.336924297 +0000 UTC m=+95.331753087" Jan 23 14:05:48 crc kubenswrapper[4775]: I0123 14:05:48.713178 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:48 crc kubenswrapper[4775]: E0123 14:05:48.713325 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:48 crc kubenswrapper[4775]: I0123 14:05:48.714154 4775 scope.go:117] "RemoveContainer" containerID="705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157" Jan 23 14:05:48 crc kubenswrapper[4775]: E0123 14:05:48.714373 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" Jan 23 14:05:49 crc kubenswrapper[4775]: I0123 14:05:49.713102 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:49 crc kubenswrapper[4775]: I0123 14:05:49.713131 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:49 crc kubenswrapper[4775]: E0123 14:05:49.713282 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:49 crc kubenswrapper[4775]: E0123 14:05:49.713427 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:49 crc kubenswrapper[4775]: I0123 14:05:49.713557 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:49 crc kubenswrapper[4775]: E0123 14:05:49.713695 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:50 crc kubenswrapper[4775]: I0123 14:05:50.712983 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:50 crc kubenswrapper[4775]: E0123 14:05:50.713260 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:51 crc kubenswrapper[4775]: I0123 14:05:51.713957 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:51 crc kubenswrapper[4775]: E0123 14:05:51.714065 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:51 crc kubenswrapper[4775]: I0123 14:05:51.714139 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:51 crc kubenswrapper[4775]: I0123 14:05:51.714174 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:51 crc kubenswrapper[4775]: E0123 14:05:51.714364 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:51 crc kubenswrapper[4775]: E0123 14:05:51.714483 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:52 crc kubenswrapper[4775]: I0123 14:05:52.713597 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:52 crc kubenswrapper[4775]: E0123 14:05:52.713916 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:53 crc kubenswrapper[4775]: I0123 14:05:53.714184 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:53 crc kubenswrapper[4775]: I0123 14:05:53.714232 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:53 crc kubenswrapper[4775]: E0123 14:05:53.716588 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:53 crc kubenswrapper[4775]: I0123 14:05:53.716628 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:53 crc kubenswrapper[4775]: E0123 14:05:53.716938 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:53 crc kubenswrapper[4775]: E0123 14:05:53.717076 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:54 crc kubenswrapper[4775]: I0123 14:05:54.687084 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs\") pod \"network-metrics-daemon-47lz2\" (UID: \"63ed1a97-c97e-40d0-afdf-260c475dc83f\") " pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:54 crc kubenswrapper[4775]: E0123 14:05:54.687286 4775 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 14:05:54 crc kubenswrapper[4775]: E0123 14:05:54.687362 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs podName:63ed1a97-c97e-40d0-afdf-260c475dc83f nodeName:}" failed. No retries permitted until 2026-01-23 14:06:58.687343567 +0000 UTC m=+165.682172307 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs") pod "network-metrics-daemon-47lz2" (UID: "63ed1a97-c97e-40d0-afdf-260c475dc83f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 14:05:54 crc kubenswrapper[4775]: I0123 14:05:54.713389 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:54 crc kubenswrapper[4775]: E0123 14:05:54.713743 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:55 crc kubenswrapper[4775]: I0123 14:05:55.713480 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:55 crc kubenswrapper[4775]: I0123 14:05:55.713571 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:55 crc kubenswrapper[4775]: E0123 14:05:55.713617 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:55 crc kubenswrapper[4775]: E0123 14:05:55.713710 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:55 crc kubenswrapper[4775]: I0123 14:05:55.713502 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:55 crc kubenswrapper[4775]: E0123 14:05:55.713831 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:56 crc kubenswrapper[4775]: I0123 14:05:56.713971 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:56 crc kubenswrapper[4775]: E0123 14:05:56.714647 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:57 crc kubenswrapper[4775]: I0123 14:05:57.713718 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:57 crc kubenswrapper[4775]: I0123 14:05:57.713775 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:57 crc kubenswrapper[4775]: I0123 14:05:57.713779 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:57 crc kubenswrapper[4775]: E0123 14:05:57.713991 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:57 crc kubenswrapper[4775]: E0123 14:05:57.714076 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:57 crc kubenswrapper[4775]: E0123 14:05:57.714178 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:58 crc kubenswrapper[4775]: I0123 14:05:58.712996 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:58 crc kubenswrapper[4775]: E0123 14:05:58.713116 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:59 crc kubenswrapper[4775]: I0123 14:05:59.713028 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:59 crc kubenswrapper[4775]: I0123 14:05:59.713110 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:59 crc kubenswrapper[4775]: E0123 14:05:59.713442 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:59 crc kubenswrapper[4775]: I0123 14:05:59.713504 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:59 crc kubenswrapper[4775]: E0123 14:05:59.713680 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:59 crc kubenswrapper[4775]: E0123 14:05:59.714441 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:59 crc kubenswrapper[4775]: I0123 14:05:59.714950 4775 scope.go:117] "RemoveContainer" containerID="705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157" Jan 23 14:05:59 crc kubenswrapper[4775]: E0123 14:05:59.715269 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" Jan 23 14:06:00 crc kubenswrapper[4775]: I0123 14:06:00.713553 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:00 crc kubenswrapper[4775]: E0123 14:06:00.714160 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:06:01 crc kubenswrapper[4775]: I0123 14:06:01.713230 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:06:01 crc kubenswrapper[4775]: I0123 14:06:01.713230 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:06:01 crc kubenswrapper[4775]: I0123 14:06:01.713444 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:06:01 crc kubenswrapper[4775]: E0123 14:06:01.713668 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:06:01 crc kubenswrapper[4775]: E0123 14:06:01.713850 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:06:01 crc kubenswrapper[4775]: E0123 14:06:01.713915 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:06:02 crc kubenswrapper[4775]: I0123 14:06:02.713447 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:02 crc kubenswrapper[4775]: E0123 14:06:02.713658 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:06:03 crc kubenswrapper[4775]: I0123 14:06:03.713990 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:06:03 crc kubenswrapper[4775]: I0123 14:06:03.714044 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:06:03 crc kubenswrapper[4775]: I0123 14:06:03.714110 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:06:03 crc kubenswrapper[4775]: E0123 14:06:03.717475 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:06:03 crc kubenswrapper[4775]: E0123 14:06:03.717041 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:06:03 crc kubenswrapper[4775]: E0123 14:06:03.717631 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:06:04 crc kubenswrapper[4775]: I0123 14:06:04.714047 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:04 crc kubenswrapper[4775]: E0123 14:06:04.714239 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:06:05 crc kubenswrapper[4775]: I0123 14:06:05.713462 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:06:05 crc kubenswrapper[4775]: I0123 14:06:05.713548 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:06:05 crc kubenswrapper[4775]: I0123 14:06:05.713600 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:06:05 crc kubenswrapper[4775]: E0123 14:06:05.713713 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:06:05 crc kubenswrapper[4775]: E0123 14:06:05.713878 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:06:05 crc kubenswrapper[4775]: E0123 14:06:05.713997 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:06:06 crc kubenswrapper[4775]: I0123 14:06:06.713347 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:06 crc kubenswrapper[4775]: E0123 14:06:06.713606 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:06:07 crc kubenswrapper[4775]: I0123 14:06:07.713368 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:06:07 crc kubenswrapper[4775]: I0123 14:06:07.713503 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:06:07 crc kubenswrapper[4775]: E0123 14:06:07.713545 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:06:07 crc kubenswrapper[4775]: E0123 14:06:07.713769 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:06:07 crc kubenswrapper[4775]: I0123 14:06:07.714867 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:06:07 crc kubenswrapper[4775]: E0123 14:06:07.715252 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:06:08 crc kubenswrapper[4775]: I0123 14:06:08.713936 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:08 crc kubenswrapper[4775]: E0123 14:06:08.714144 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:06:09 crc kubenswrapper[4775]: I0123 14:06:09.382569 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hpxpf_ba4447c0-bada-49eb-b6b4-b25feff627a9/kube-multus/1.log" Jan 23 14:06:09 crc kubenswrapper[4775]: I0123 14:06:09.383303 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hpxpf_ba4447c0-bada-49eb-b6b4-b25feff627a9/kube-multus/0.log" Jan 23 14:06:09 crc kubenswrapper[4775]: I0123 14:06:09.383383 4775 generic.go:334] "Generic (PLEG): container finished" podID="ba4447c0-bada-49eb-b6b4-b25feff627a9" containerID="8f14be984531a60487db2daba36d9cba7f2bbafa8b8d68889c261f3b2260f058" exitCode=1 Jan 23 14:06:09 crc kubenswrapper[4775]: I0123 14:06:09.383442 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hpxpf" event={"ID":"ba4447c0-bada-49eb-b6b4-b25feff627a9","Type":"ContainerDied","Data":"8f14be984531a60487db2daba36d9cba7f2bbafa8b8d68889c261f3b2260f058"} Jan 23 14:06:09 crc kubenswrapper[4775]: I0123 14:06:09.383520 4775 scope.go:117] "RemoveContainer" containerID="d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec" Jan 23 14:06:09 crc kubenswrapper[4775]: I0123 14:06:09.384203 4775 scope.go:117] "RemoveContainer" containerID="8f14be984531a60487db2daba36d9cba7f2bbafa8b8d68889c261f3b2260f058" Jan 23 14:06:09 crc kubenswrapper[4775]: E0123 14:06:09.384460 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-hpxpf_openshift-multus(ba4447c0-bada-49eb-b6b4-b25feff627a9)\"" pod="openshift-multus/multus-hpxpf" podUID="ba4447c0-bada-49eb-b6b4-b25feff627a9" Jan 23 14:06:09 crc kubenswrapper[4775]: I0123 14:06:09.713100 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:06:09 crc kubenswrapper[4775]: I0123 14:06:09.713215 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:06:09 crc kubenswrapper[4775]: E0123 14:06:09.713295 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:06:09 crc kubenswrapper[4775]: I0123 14:06:09.713235 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:06:09 crc kubenswrapper[4775]: E0123 14:06:09.713351 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:06:09 crc kubenswrapper[4775]: E0123 14:06:09.713441 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:06:10 crc kubenswrapper[4775]: I0123 14:06:10.389199 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hpxpf_ba4447c0-bada-49eb-b6b4-b25feff627a9/kube-multus/1.log" Jan 23 14:06:10 crc kubenswrapper[4775]: I0123 14:06:10.712920 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:10 crc kubenswrapper[4775]: E0123 14:06:10.713075 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:06:11 crc kubenswrapper[4775]: I0123 14:06:11.714130 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:06:11 crc kubenswrapper[4775]: I0123 14:06:11.714207 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:06:11 crc kubenswrapper[4775]: I0123 14:06:11.714320 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:06:11 crc kubenswrapper[4775]: E0123 14:06:11.714311 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:06:11 crc kubenswrapper[4775]: E0123 14:06:11.714486 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:06:11 crc kubenswrapper[4775]: E0123 14:06:11.714562 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:06:11 crc kubenswrapper[4775]: I0123 14:06:11.715472 4775 scope.go:117] "RemoveContainer" containerID="705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157" Jan 23 14:06:11 crc kubenswrapper[4775]: E0123 14:06:11.715674 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" Jan 23 14:06:12 crc kubenswrapper[4775]: I0123 14:06:12.713082 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:12 crc kubenswrapper[4775]: E0123 14:06:12.713267 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:06:13 crc kubenswrapper[4775]: I0123 14:06:13.713264 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:06:13 crc kubenswrapper[4775]: E0123 14:06:13.714475 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:06:13 crc kubenswrapper[4775]: I0123 14:06:13.714515 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:06:13 crc kubenswrapper[4775]: I0123 14:06:13.714565 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:06:13 crc kubenswrapper[4775]: E0123 14:06:13.714646 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:06:13 crc kubenswrapper[4775]: E0123 14:06:13.714832 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:06:13 crc kubenswrapper[4775]: E0123 14:06:13.733737 4775 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 23 14:06:13 crc kubenswrapper[4775]: E0123 14:06:13.827421 4775 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 23 14:06:14 crc kubenswrapper[4775]: I0123 14:06:14.713789 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:14 crc kubenswrapper[4775]: E0123 14:06:14.714000 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:06:15 crc kubenswrapper[4775]: I0123 14:06:15.713678 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:06:15 crc kubenswrapper[4775]: I0123 14:06:15.713768 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:06:15 crc kubenswrapper[4775]: I0123 14:06:15.713684 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:06:15 crc kubenswrapper[4775]: E0123 14:06:15.713964 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:06:15 crc kubenswrapper[4775]: E0123 14:06:15.714101 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:06:15 crc kubenswrapper[4775]: E0123 14:06:15.714247 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:06:16 crc kubenswrapper[4775]: I0123 14:06:16.713716 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:16 crc kubenswrapper[4775]: E0123 14:06:16.713933 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:06:17 crc kubenswrapper[4775]: I0123 14:06:17.713788 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:06:17 crc kubenswrapper[4775]: I0123 14:06:17.713899 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:06:17 crc kubenswrapper[4775]: I0123 14:06:17.713818 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:06:17 crc kubenswrapper[4775]: E0123 14:06:17.714057 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:06:17 crc kubenswrapper[4775]: E0123 14:06:17.714221 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:06:17 crc kubenswrapper[4775]: E0123 14:06:17.714444 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:06:18 crc kubenswrapper[4775]: I0123 14:06:18.713668 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:18 crc kubenswrapper[4775]: E0123 14:06:18.713939 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:06:18 crc kubenswrapper[4775]: E0123 14:06:18.828649 4775 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Jan 23 14:06:19 crc kubenswrapper[4775]: I0123 14:06:19.716134 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:06:19 crc kubenswrapper[4775]: I0123 14:06:19.716241 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:06:19 crc kubenswrapper[4775]: E0123 14:06:19.716272 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:06:19 crc kubenswrapper[4775]: E0123 14:06:19.716412 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:06:19 crc kubenswrapper[4775]: I0123 14:06:19.717031 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:06:19 crc kubenswrapper[4775]: E0123 14:06:19.717292 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:06:20 crc kubenswrapper[4775]: I0123 14:06:20.713894 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:20 crc kubenswrapper[4775]: E0123 14:06:20.714131 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:06:21 crc kubenswrapper[4775]: I0123 14:06:21.713398 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:06:21 crc kubenswrapper[4775]: I0123 14:06:21.713481 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:06:21 crc kubenswrapper[4775]: E0123 14:06:21.713581 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
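
Editor's note: every error above reduces to one condition: the container runtime reports NetworkReady=false because no CNI configuration file (*.conf/*.conflist) exists yet in /etc/kubernetes/cni/net.d/, so each pod needing a sandbox is requeued. Below is a minimal Go sketch, not the kubelet's actual code, of the readiness check implied by the message; the directory path comes from the log, everything else is illustrative.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

const cniConfDir = "/etc/kubernetes/cni/net.d" // path taken from the log message

// cniConfigPresent approximates "network plugin ready": true once any
// CNI config file appears in the plugin directory.
func cniConfigPresent(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := cniConfigPresent(cniConfDir)
	if err != nil || !ok {
		// Mirrors the kubelet's complaint: NetworkReady=false, NetworkPluginNotReady.
		fmt.Println("network plugin not ready: no CNI configuration file in", cniConfDir)
		return
	}
	fmt.Println("CNI configuration found; pod sandboxes can be created")
}

In this trace the condition clears once ovnkube-node writes its config: the ContainerStarted event for ovnkube-node-qrvs8 appears at 14:06:24 and NodeReady follows at 14:06:38.
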
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:06:21 crc kubenswrapper[4775]: E0123 14:06:21.713995 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:06:21 crc kubenswrapper[4775]: I0123 14:06:21.714206 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:06:21 crc kubenswrapper[4775]: E0123 14:06:21.714353 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:06:22 crc kubenswrapper[4775]: I0123 14:06:22.712922 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:22 crc kubenswrapper[4775]: E0123 14:06:22.713461 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:06:23 crc kubenswrapper[4775]: I0123 14:06:23.713494 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:06:23 crc kubenswrapper[4775]: I0123 14:06:23.713595 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:06:23 crc kubenswrapper[4775]: E0123 14:06:23.713715 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:06:23 crc kubenswrapper[4775]: I0123 14:06:23.717154 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:06:23 crc kubenswrapper[4775]: I0123 14:06:23.717485 4775 scope.go:117] "RemoveContainer" containerID="8f14be984531a60487db2daba36d9cba7f2bbafa8b8d68889c261f3b2260f058" Jan 23 14:06:23 crc kubenswrapper[4775]: E0123 14:06:23.717471 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:06:23 crc kubenswrapper[4775]: E0123 14:06:23.717593 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:06:23 crc kubenswrapper[4775]: I0123 14:06:23.720150 4775 scope.go:117] "RemoveContainer" containerID="705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157" Jan 23 14:06:23 crc kubenswrapper[4775]: E0123 14:06:23.829381 4775 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 23 14:06:24 crc kubenswrapper[4775]: I0123 14:06:24.441637 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hpxpf_ba4447c0-bada-49eb-b6b4-b25feff627a9/kube-multus/1.log" Jan 23 14:06:24 crc kubenswrapper[4775]: I0123 14:06:24.441758 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hpxpf" event={"ID":"ba4447c0-bada-49eb-b6b4-b25feff627a9","Type":"ContainerStarted","Data":"555e839180bbda237f6205ae573637b3ee9ad39df04b574cb5b7216b7c451510"} Jan 23 14:06:24 crc kubenswrapper[4775]: I0123 14:06:24.444051 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovnkube-controller/3.log" Jan 23 14:06:24 crc kubenswrapper[4775]: I0123 14:06:24.447921 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerStarted","Data":"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481"} Jan 23 14:06:24 crc kubenswrapper[4775]: I0123 14:06:24.448385 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:06:24 crc kubenswrapper[4775]: I0123 14:06:24.489535 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podStartSLOduration=108.489513431 podStartE2EDuration="1m48.489513431s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:24.488469373 +0000 UTC m=+131.483298113" watchObservedRunningTime="2026-01-23 14:06:24.489513431 +0000 UTC m=+131.484342181" Jan 23 14:06:24 crc kubenswrapper[4775]: I0123 14:06:24.683223 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-47lz2"] Jan 23 14:06:24 crc kubenswrapper[4775]: I0123 14:06:24.683367 4775 util.go:30] "No sandbox for pod can be found. 
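
Editor's note: the "Observed pod startup duration" entry above is simple arithmetic over its own fields: podStartSLOduration = watchObservedRunningTime − podCreationTimestamp = 14:06:24.489513431 − 14:04:36 = 108.489513431 s (the "1m48.489513431s" E2E duration). The zero-valued firstStartedPulling/lastFinishedPulling indicate no image pull was needed. A small Go sketch recomputing it, with the timestamps copied from the log (the monotonic "m=+..." suffix stripped, since time.Parse does not accept it):

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-01-23 14:04:36 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2026-01-23 14:06:24.489513431 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 108.489513431, matching podStartSLOduration in the entry above.
	fmt.Println(running.Sub(created).Seconds())
}
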
Jan 23 14:06:24 crc kubenswrapper[4775]: I0123 14:06:24.683223 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-47lz2"]
Jan 23 14:06:24 crc kubenswrapper[4775]: I0123 14:06:24.683367 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2"
Jan 23 14:06:24 crc kubenswrapper[4775]: E0123 14:06:24.683491 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f"
Jan 23 14:06:25 crc kubenswrapper[4775]: I0123 14:06:25.713648 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 14:06:25 crc kubenswrapper[4775]: E0123 14:06:25.714150 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 14:06:25 crc kubenswrapper[4775]: I0123 14:06:25.713659 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 14:06:25 crc kubenswrapper[4775]: I0123 14:06:25.713739 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:06:25 crc kubenswrapper[4775]: E0123 14:06:25.714410 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 14:06:25 crc kubenswrapper[4775]: E0123 14:06:25.714563 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 14:06:26 crc kubenswrapper[4775]: I0123 14:06:26.713742 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2"
Jan 23 14:06:26 crc kubenswrapper[4775]: E0123 14:06:26.713913 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f"
Jan 23 14:06:27 crc kubenswrapper[4775]: I0123 14:06:27.713712 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 14:06:27 crc kubenswrapper[4775]: I0123 14:06:27.713860 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 14:06:27 crc kubenswrapper[4775]: E0123 14:06:27.713934 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 14:06:27 crc kubenswrapper[4775]: I0123 14:06:27.713993 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:06:27 crc kubenswrapper[4775]: E0123 14:06:27.714076 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 14:06:27 crc kubenswrapper[4775]: E0123 14:06:27.714199 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 14:06:28 crc kubenswrapper[4775]: I0123 14:06:28.713864 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2"
Jan 23 14:06:28 crc kubenswrapper[4775]: E0123 14:06:28.714157 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f"
Jan 23 14:06:29 crc kubenswrapper[4775]: I0123 14:06:29.713082 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 14:06:29 crc kubenswrapper[4775]: I0123 14:06:29.713098 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:06:29 crc kubenswrapper[4775]: I0123 14:06:29.713127 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 14:06:29 crc kubenswrapper[4775]: I0123 14:06:29.718318 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 23 14:06:29 crc kubenswrapper[4775]: I0123 14:06:29.718700 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 23 14:06:29 crc kubenswrapper[4775]: I0123 14:06:29.718954 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 23 14:06:29 crc kubenswrapper[4775]: I0123 14:06:29.719787 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 23 14:06:30 crc kubenswrapper[4775]: I0123 14:06:30.713955 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2"
Jan 23 14:06:30 crc kubenswrapper[4775]: I0123 14:06:30.717995 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 23 14:06:30 crc kubenswrapper[4775]: I0123 14:06:30.718797 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.226992 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.283333 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-svb79"]
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.284261 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.286748 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-577dd"]
Jan 23 14:06:38 crc kubenswrapper[4775]: W0123 14:06:38.286866 4775 reflector.go:561] object-"openshift-machine-api"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object
Jan 23 14:06:38 crc kubenswrapper[4775]: E0123 14:06:38.286933 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError"
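
Editor's note: the W/E reflector pairs that begin here are transient, not fatal. The Node authorizer only lets system:node:crc read a secret or configmap once a pod referencing that object is bound to the node; during the post-NodeReady scheduling burst the kubelet's watches briefly race that binding, hence "no relationship found between node 'crc' and this object". The later "Caches populated" lines show each grant converging. A hedged Go sketch using client-go to probe the same authorization decision; the kubeconfig path is an assumption, the namespace and object name come from the first denial above:

package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Ask the API server whether the current (kubelet) credential may list
	// the configmap the reflector was denied.
	sar := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Namespace: "openshift-machine-api",
				Verb:      "list",
				Resource:  "configmaps",
				Name:      "openshift-service-ca.crt",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// Denied until a pod on this node references the object; allowed afterwards.
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}
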
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.287784 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd"
Jan 23 14:06:38 crc kubenswrapper[4775]: W0123 14:06:38.288044 4775 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-tls": failed to list *v1.Secret: secrets "machine-api-operator-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object
Jan 23 14:06:38 crc kubenswrapper[4775]: E0123 14:06:38.288105 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-api-operator-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 23 14:06:38 crc kubenswrapper[4775]: W0123 14:06:38.288485 4775 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7": failed to list *v1.Secret: secrets "machine-api-operator-dockercfg-mfbb7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object
Jan 23 14:06:38 crc kubenswrapper[4775]: E0123 14:06:38.288535 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-mfbb7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-api-operator-dockercfg-mfbb7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 23 14:06:38 crc kubenswrapper[4775]: W0123 14:06:38.288723 4775 reflector.go:561] object-"openshift-machine-api"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object
Jan 23 14:06:38 crc kubenswrapper[4775]: E0123 14:06:38.288787 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 23 14:06:38 crc kubenswrapper[4775]: W0123 14:06:38.289903 4775 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-images": failed to list *v1.ConfigMap: configmaps "machine-api-operator-images" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object
Jan 23 14:06:38 crc kubenswrapper[4775]: E0123 14:06:38.289957 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-api-operator-images\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.291103 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5"]
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.292110 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.293643 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq"]
Jan 23 14:06:38 crc kubenswrapper[4775]: W0123 14:06:38.293947 4775 reflector.go:561] object-"openshift-authentication-operator"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object
Jan 23 14:06:38 crc kubenswrapper[4775]: W0123 14:06:38.293987 4775 reflector.go:561] object-"openshift-authentication-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object
Jan 23 14:06:38 crc kubenswrapper[4775]: E0123 14:06:38.294020 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 23 14:06:38 crc kubenswrapper[4775]: E0123 14:06:38.294075 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.294526 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq"
Jan 23 14:06:38 crc kubenswrapper[4775]: W0123 14:06:38.295574 4775 reflector.go:561] object-"openshift-authentication-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object
Jan 23 14:06:38 crc kubenswrapper[4775]: E0123 14:06:38.295644 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 23 14:06:38 crc kubenswrapper[4775]: W0123 14:06:38.295660 4775 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object
Jan 23 14:06:38 crc kubenswrapper[4775]: E0123 14:06:38.295705 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 23 14:06:38 crc kubenswrapper[4775]: W0123 14:06:38.295779 4775 reflector.go:561] object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object
Jan 23 14:06:38 crc kubenswrapper[4775]: E0123 14:06:38.295868 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.297917 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v2bx4"]
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.298762 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-mc4h4"]
Jan 23 14:06:38 crc kubenswrapper[4775]: W0123 14:06:38.299390 4775 reflector.go:561] object-"openshift-authentication-operator"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object
Jan 23 14:06:38 crc kubenswrapper[4775]: E0123 14:06:38.299460 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 23 14:06:38 crc kubenswrapper[4775]: W0123 14:06:38.299407 4775 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4": failed to list *v1.Secret: secrets "machine-approver-sa-dockercfg-nl2j4" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object
Jan 23 14:06:38 crc kubenswrapper[4775]: E0123 14:06:38.299518 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-nl2j4\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-approver-sa-dockercfg-nl2j4\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 23 14:06:38 crc kubenswrapper[4775]: W0123 14:06:38.299563 4775 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-config": failed to list *v1.ConfigMap: configmaps "authentication-operator-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object
Jan 23 14:06:38 crc kubenswrapper[4775]: E0123 14:06:38.299620 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"authentication-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.299757 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.299909 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.298791 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.300133 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-mc4h4"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.301341 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ddqcf"]
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.302408 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ddqcf"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.304958 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf"]
Jan 23 14:06:38 crc kubenswrapper[4775]: W0123 14:06:38.305435 4775 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj": failed to list *v1.Secret: secrets "authentication-operator-dockercfg-mz9bj" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object
Jan 23 14:06:38 crc kubenswrapper[4775]: E0123 14:06:38.305504 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-mz9bj\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"authentication-operator-dockercfg-mz9bj\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.305603 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.305917 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn"]
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.306640 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.306743 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.307532 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4q8mj"]
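
Editor's note: the burst of "SyncLoop ADD" source="api" lines is the backlog of control-plane pods being bound to the node the moment it turns Ready; each ADD arrives on the kubelet's API watch, which is scoped to pods scheduled onto this node via a field selector. A minimal client-go sketch of that feed (not the kubelet's actual implementation); the kubeconfig path and the node name "crc" are taken from the log context:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Watch pods in all namespaces, restricted to those bound to node "crc".
	w, err := cs.CoreV1().Pods(metav1.NamespaceAll).Watch(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=crc",
	})
	if err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		// ADDED events here correspond to the kubelet's "SyncLoop ADD" lines.
		fmt.Println("event:", ev.Type)
	}
}
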
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.308384 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.309717 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.310102 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.310155 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.309848 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.309991 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.310598 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.310658 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.310711 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.310788 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.310929 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.311658 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.313862 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh"]
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.314484 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-mvqcg"]
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.314997 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-mvqcg"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.315089 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.315323 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.315348 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.315570 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.318881 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-7gqzl"]
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.319495 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2"]
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.320061 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.324723 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.325117 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.325215 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.325427 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.326442 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.326653 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.326758 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-7gqzl"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.326924 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.327225 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.329752 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.329923 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.330092 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.330192 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.329952 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6"]
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.330836 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.331174 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.331373 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.331454 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.331850 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.332139 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.332310 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.332415 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.332474 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.332650 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.333031 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.334362 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-fgb82"]
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.354054 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.354449 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.354613 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.355537 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fgb82"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.356034 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.356917 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-577dd"]
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.358893 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.360973 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.361213 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.361301 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.362659 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.364182 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.365660 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.365705 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.365864 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.365983 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.366099 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.366314 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.366455 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.366474 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.366496 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.366573 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.366617 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.366677 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.366713 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.366784 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.366824 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.366938 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.366994 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.367066 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.367111 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
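
Editor's note: each "Caches populated for *v1.ConfigMap/Secret from object-..." line corresponds to a dedicated single-object reflector: for every secret or configmap a pod on this node consumes, the kubelet runs a ListWatch narrowed to that one object, which is why the caches are reported individually. A hedged client-go sketch of the same pattern; the kubeconfig path is an assumption, and the namespace/object name are borrowed from one of the entries above:

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// One ListWatch per (namespace, name): the field selector pins the
	// watch to a single object, mirroring the per-object caches in the log.
	lw := cache.NewListWatchFromClient(
		cs.CoreV1().RESTClient(), "configmaps", "openshift-apiserver",
		fields.OneTermEqualSelector("metadata.name", "etcd-serving-ca"),
	)
	informer := cache.NewSharedInformer(lw, &corev1.ConfigMap{}, 0)
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			fmt.Println("cache populated:", obj.(*corev1.ConfigMap).Name)
		},
	})
	stop := make(chan struct{})
	go informer.Run(stop)
	if !cache.WaitForCacheSync(stop, informer.HasSynced) {
		panic("cache never synced")
	}
	time.Sleep(time.Second)
	close(stop)
}
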
object-"openshift-apiserver"/"trusted-ca-bundle" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.367197 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.367273 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.368602 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.369788 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.370843 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.370988 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.371217 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.371298 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-svb79"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.372503 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.372876 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.374049 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-mc4h4"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.374241 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.375366 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v2bx4"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.376936 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.377862 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.378558 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.380780 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bjb9d"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.381016 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.381473 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psxgx"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.382004 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psxgx" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.381488 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.383202 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.383648 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.384955 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-nj2dd"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.399458 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.400645 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.401186 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.401362 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pmcq8"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.402040 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.402741 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.418822 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.419854 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.420084 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.420998 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xpwjl"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.421446 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422176 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422236 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6js2\" (UniqueName: \"kubernetes.io/projected/3066d31d-92a4-45a7-b368-ba66d5689456-kube-api-access-p6js2\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422262 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85a9044b-9089-4a6a-87e6-06372c531aa9-config\") pod \"machine-api-operator-5694c8668f-svb79\" (UID: \"85a9044b-9089-4a6a-87e6-06372c531aa9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422280 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422331 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422347 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f38f7554-61cc-493f-8705-8da5f91d3926-service-ca-bundle\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422362 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6995952d-6d8a-494d-842c-1d5cf9ee1207-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-gc9bh\" (UID: \"6995952d-6d8a-494d-842c-1d5cf9ee1207\") " 
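
Editor's note: the reconciler_common.go:245 lines mark the volume manager populating its desired state of world for the newly admitted pods: before any MountVolume, VerifyControllerAttachedVolume confirms the volume either is attached or (for configmap/secret/projected volumes, as here) needs no attach at all. A toy Go model, not kubelet code, of the entry shape; the UniqueName and pod fields are copied from one of the log lines above:

package main

import "fmt"

// volumeToMount mirrors the fields the reconciler logs: each volume is keyed
// by a UniqueName of the form kubernetes.io/<plugin>/<podUID>-<volumeName>.
type volumeToMount struct {
	uniqueName string
	podName    string
	podUID     string
}

// verifyControllerAttached is trivially satisfied for configmap/secret/
// projected volumes, which are never controller-attached; the reconciler
// can then proceed straight to mounting.
func verifyControllerAttached(v volumeToMount) error {
	return nil
}

func main() {
	v := volumeToMount{
		uniqueName: "kubernetes.io/configmap/85a9044b-9089-4a6a-87e6-06372c531aa9-config",
		podName:    "openshift-machine-api/machine-api-operator-5694c8668f-svb79",
		podUID:     "85a9044b-9089-4a6a-87e6-06372c531aa9",
	}
	if err := verifyControllerAttached(v); err != nil {
		fmt.Println("verify failed:", err)
		return
	}
	fmt.Println("verified, ready to mount:", v.uniqueName)
}
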
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422393 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422414 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6995952d-6d8a-494d-842c-1d5cf9ee1207-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-gc9bh\" (UID: \"6995952d-6d8a-494d-842c-1d5cf9ee1207\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422434 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a5c75370-d1c6-43bd-a8e8-8836ea5bdb22-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ddqcf\" (UID: \"a5c75370-d1c6-43bd-a8e8-8836ea5bdb22\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ddqcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422465 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f3aab1c-726d-4027-b629-e04916bc4f8b-serving-cert\") pod \"controller-manager-879f6c89f-v2bx4\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422487 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f38f7554-61cc-493f-8705-8da5f91d3926-config\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422502 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba896a24-e6f2-4480-807b-b3c5b6232cea-config\") pod \"console-operator-58897d9998-7gqzl\" (UID: \"ba896a24-e6f2-4480-807b-b3c5b6232cea\") " pod="openshift-console-operator/console-operator-58897d9998-7gqzl" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422517 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcvf9\" (UniqueName: \"kubernetes.io/projected/1f3aab1c-726d-4027-b629-e04916bc4f8b-kube-api-access-vcvf9\") pod \"controller-manager-879f6c89f-v2bx4\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422553 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltngh\" (UniqueName: \"kubernetes.io/projected/f38f7554-61cc-493f-8705-8da5f91d3926-kube-api-access-ltngh\") pod 
\"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422568 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422582 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/216b36e4-0e40-4073-9432-d1977dc6e03a-auth-proxy-config\") pod \"machine-approver-56656f9798-zbzw5\" (UID: \"216b36e4-0e40-4073-9432-d1977dc6e03a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422599 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t44w\" (UniqueName: \"kubernetes.io/projected/f9750de6-fc79-440e-8ad4-07acbe4edb49-kube-api-access-8t44w\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422634 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422648 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-config\") pod \"route-controller-manager-6576b87f9c-lqcpn\" (UID: \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422664 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5th22\" (UniqueName: \"kubernetes.io/projected/ba896a24-e6f2-4480-807b-b3c5b6232cea-kube-api-access-5th22\") pod \"console-operator-58897d9998-7gqzl\" (UID: \"ba896a24-e6f2-4480-807b-b3c5b6232cea\") " pod="openshift-console-operator/console-operator-58897d9998-7gqzl" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422695 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422711 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f9750de6-fc79-440e-8ad4-07acbe4edb49-trusted-ca-bundle\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422725 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f9750de6-fc79-440e-8ad4-07acbe4edb49-encryption-config\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422738 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c575b767-e334-406f-849d-e562d70985fd-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422757 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smh4x\" (UniqueName: \"kubernetes.io/projected/c575b767-e334-406f-849d-e562d70985fd-kube-api-access-smh4x\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422786 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f9750de6-fc79-440e-8ad4-07acbe4edb49-image-import-ca\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422822 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f9750de6-fc79-440e-8ad4-07acbe4edb49-node-pullsecrets\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422839 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4bbc\" (UniqueName: \"kubernetes.io/projected/549e54fa-53eb-4a9d-9578-5cfbd02bb28d-kube-api-access-b4bbc\") pod \"openshift-apiserver-operator-796bbdcf4f-qnhrq\" (UID: \"549e54fa-53eb-4a9d-9578-5cfbd02bb28d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422856 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/216b36e4-0e40-4073-9432-d1977dc6e03a-machine-approver-tls\") pod \"machine-approver-56656f9798-zbzw5\" (UID: \"216b36e4-0e40-4073-9432-d1977dc6e03a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422870 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f9750de6-fc79-440e-8ad4-07acbe4edb49-etcd-client\") pod 
\"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422900 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-client-ca\") pod \"controller-manager-879f6c89f-v2bx4\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422916 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9750de6-fc79-440e-8ad4-07acbe4edb49-serving-cert\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422931 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/85a9044b-9089-4a6a-87e6-06372c531aa9-images\") pod \"machine-api-operator-5694c8668f-svb79\" (UID: \"85a9044b-9089-4a6a-87e6-06372c531aa9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422947 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/216b36e4-0e40-4073-9432-d1977dc6e03a-config\") pod \"machine-approver-56656f9798-zbzw5\" (UID: \"216b36e4-0e40-4073-9432-d1977dc6e03a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422979 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422995 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk85x\" (UniqueName: \"kubernetes.io/projected/216b36e4-0e40-4073-9432-d1977dc6e03a-kube-api-access-kk85x\") pod \"machine-approver-56656f9798-zbzw5\" (UID: \"216b36e4-0e40-4073-9432-d1977dc6e03a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423011 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c575b767-e334-406f-849d-e562d70985fd-audit-dir\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423030 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f38f7554-61cc-493f-8705-8da5f91d3926-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423061 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/85a9044b-9089-4a6a-87e6-06372c531aa9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-svb79\" (UID: \"85a9044b-9089-4a6a-87e6-06372c531aa9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423077 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3066d31d-92a4-45a7-b368-ba66d5689456-audit-dir\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423095 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9750de6-fc79-440e-8ad4-07acbe4edb49-config\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423109 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-serving-cert\") pod \"route-controller-manager-6576b87f9c-lqcpn\" (UID: \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423142 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdjg2\" (UniqueName: \"kubernetes.io/projected/8ba1b8ce-8332-45c9-bfb0-9a1842dea009-kube-api-access-tdjg2\") pod \"downloads-7954f5f757-mvqcg\" (UID: \"8ba1b8ce-8332-45c9-bfb0-9a1842dea009\") " pod="openshift-console/downloads-7954f5f757-mvqcg" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423160 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423174 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f9750de6-fc79-440e-8ad4-07acbe4edb49-audit-dir\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423187 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f9750de6-fc79-440e-8ad4-07acbe4edb49-audit\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: 
I0123 14:06:38.423216 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-v2bx4\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423231 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba896a24-e6f2-4480-807b-b3c5b6232cea-serving-cert\") pod \"console-operator-58897d9998-7gqzl\" (UID: \"ba896a24-e6f2-4480-807b-b3c5b6232cea\") " pod="openshift-console-operator/console-operator-58897d9998-7gqzl"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423247 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zngzz\" (UniqueName: \"kubernetes.io/projected/a5c75370-d1c6-43bd-a8e8-8836ea5bdb22-kube-api-access-zngzz\") pod \"cluster-samples-operator-665b6dd947-ddqcf\" (UID: \"a5c75370-d1c6-43bd-a8e8-8836ea5bdb22\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ddqcf"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423263 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/549e54fa-53eb-4a9d-9578-5cfbd02bb28d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qnhrq\" (UID: \"549e54fa-53eb-4a9d-9578-5cfbd02bb28d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423290 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ba896a24-e6f2-4480-807b-b3c5b6232cea-trusted-ca\") pod \"console-operator-58897d9998-7gqzl\" (UID: \"ba896a24-e6f2-4480-807b-b3c5b6232cea\") " pod="openshift-console-operator/console-operator-58897d9998-7gqzl"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423305 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-audit-policies\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423320 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c575b767-e334-406f-849d-e562d70985fd-encryption-config\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423365 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f38f7554-61cc-493f-8705-8da5f91d3926-serving-cert\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423381 4775
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f9750de6-fc79-440e-8ad4-07acbe4edb49-etcd-serving-ca\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423397 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/549e54fa-53eb-4a9d-9578-5cfbd02bb28d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qnhrq\" (UID: \"549e54fa-53eb-4a9d-9578-5cfbd02bb28d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423413 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-client-ca\") pod \"route-controller-manager-6576b87f9c-lqcpn\" (UID: \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423445 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c575b767-e334-406f-849d-e562d70985fd-serving-cert\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423460 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqqd4\" (UniqueName: \"kubernetes.io/projected/6995952d-6d8a-494d-842c-1d5cf9ee1207-kube-api-access-mqqd4\") pod \"openshift-controller-manager-operator-756b6f6bc6-gc9bh\" (UID: \"6995952d-6d8a-494d-842c-1d5cf9ee1207\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423476 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdgbr\" (UniqueName: \"kubernetes.io/projected/85a9044b-9089-4a6a-87e6-06372c531aa9-kube-api-access-rdgbr\") pod \"machine-api-operator-5694c8668f-svb79\" (UID: \"85a9044b-9089-4a6a-87e6-06372c531aa9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423491 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423521 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c575b767-e334-406f-849d-e562d70985fd-audit-policies\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423535 
4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c575b767-e334-406f-849d-e562d70985fd-etcd-client\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423549 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-config\") pod \"controller-manager-879f6c89f-v2bx4\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423577 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423608 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423622 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c575b767-e334-406f-849d-e562d70985fd-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423637 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsv7w\" (UniqueName: \"kubernetes.io/projected/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-kube-api-access-rsv7w\") pod \"route-controller-manager-6576b87f9c-lqcpn\" (UID: \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.427027 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-c9x8w"]
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.427884 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.428038 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-c9x8w"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.428191 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-2lgz4"]
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.428613 4775 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-2lgz4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.429395 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-f7z9k"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.430223 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-f7z9k" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.433426 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.433908 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.440463 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ddqcf"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.440507 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.440997 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.442521 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.444096 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.448089 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.448732 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.448897 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.449503 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.450351 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.457324 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.458932 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.459601 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.460657 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.462017 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.462365 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.462532 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.463271 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.463898 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.464958 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-kmqrn"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.465689 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kmqrn" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.467866 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.475646 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.477319 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-br76j"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.478129 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.478519 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.478978 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.479234 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-br76j" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.479401 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.479871 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-btttg"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.480765 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-btttg" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.486949 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.496430 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.497726 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-mvqcg"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.499333 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.502409 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.503577 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psxgx"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.504618 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-7gqzl"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.506049 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.513522 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pmcq8"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.513925 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-2lgz4"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.517514 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.518072 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xpwjl"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.521196 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-f7z9k"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.525013 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f9750de6-fc79-440e-8ad4-07acbe4edb49-audit\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.526238 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-proxy-ca-bundles\") pod 
\"controller-manager-879f6c89f-v2bx4\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.527398 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba896a24-e6f2-4480-807b-b3c5b6232cea-serving-cert\") pod \"console-operator-58897d9998-7gqzl\" (UID: \"ba896a24-e6f2-4480-807b-b3c5b6232cea\") " pod="openshift-console-operator/console-operator-58897d9998-7gqzl" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.526362 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.527279 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.527364 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-v2bx4\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.526183 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f9750de6-fc79-440e-8ad4-07acbe4edb49-audit\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.528126 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d6b6f17-bb56-49ba-8487-6e07346780a1-secret-volume\") pod \"collect-profiles-29486280-gf96b\" (UID: \"2d6b6f17-bb56-49ba-8487-6e07346780a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.528459 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zngzz\" (UniqueName: \"kubernetes.io/projected/a5c75370-d1c6-43bd-a8e8-8836ea5bdb22-kube-api-access-zngzz\") pod \"cluster-samples-operator-665b6dd947-ddqcf\" (UID: \"a5c75370-d1c6-43bd-a8e8-8836ea5bdb22\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ddqcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.528540 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/549e54fa-53eb-4a9d-9578-5cfbd02bb28d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qnhrq\" (UID: \"549e54fa-53eb-4a9d-9578-5cfbd02bb28d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.528611 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ba896a24-e6f2-4480-807b-b3c5b6232cea-trusted-ca\") pod \"console-operator-58897d9998-7gqzl\" (UID: \"ba896a24-e6f2-4480-807b-b3c5b6232cea\") " pod="openshift-console-operator/console-operator-58897d9998-7gqzl" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 
14:06:38.528682 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c575b767-e334-406f-849d-e562d70985fd-encryption-config\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.528754 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbaf4876-b99e-4096-9f36-5c888312ddab-trusted-ca\") pod \"ingress-operator-5b745b69d9-xpzqz\" (UID: \"dbaf4876-b99e-4096-9f36-5c888312ddab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.528844 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-audit-policies\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.528963 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f38f7554-61cc-493f-8705-8da5f91d3926-serving-cert\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.529396 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f9750de6-fc79-440e-8ad4-07acbe4edb49-etcd-serving-ca\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.529475 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/549e54fa-53eb-4a9d-9578-5cfbd02bb28d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qnhrq\" (UID: \"549e54fa-53eb-4a9d-9578-5cfbd02bb28d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.529586 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-client-ca\") pod \"route-controller-manager-6576b87f9c-lqcpn\" (UID: \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.529658 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwv8t\" (UniqueName: \"kubernetes.io/projected/8ac48e42-bde7-4701-b994-825906603b06-kube-api-access-bwv8t\") pod \"marketplace-operator-79b997595-pmcq8\" (UID: \"8ac48e42-bde7-4701-b994-825906603b06\") " pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8"
Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.529730 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName:
\"kubernetes.io/configmap/cc6b05de-2295-4c6a-8f11-367da8bdcf00-config\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.529816 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c575b767-e334-406f-849d-e562d70985fd-serving-cert\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.529899 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqqd4\" (UniqueName: \"kubernetes.io/projected/6995952d-6d8a-494d-842c-1d5cf9ee1207-kube-api-access-mqqd4\") pod \"openshift-controller-manager-operator-756b6f6bc6-gc9bh\" (UID: \"6995952d-6d8a-494d-842c-1d5cf9ee1207\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.529972 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdgbr\" (UniqueName: \"kubernetes.io/projected/85a9044b-9089-4a6a-87e6-06372c531aa9-kube-api-access-rdgbr\") pod \"machine-api-operator-5694c8668f-svb79\" (UID: \"85a9044b-9089-4a6a-87e6-06372c531aa9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.530041 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d6b6f17-bb56-49ba-8487-6e07346780a1-config-volume\") pod \"collect-profiles-29486280-gf96b\" (UID: \"2d6b6f17-bb56-49ba-8487-6e07346780a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.530109 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc6b05de-2295-4c6a-8f11-367da8bdcf00-serving-cert\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.530180 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.530253 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c575b767-e334-406f-849d-e562d70985fd-audit-policies\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.530314 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c575b767-e334-406f-849d-e562d70985fd-etcd-client\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: 
\"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.530379 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-config\") pod \"controller-manager-879f6c89f-v2bx4\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.530448 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ac48e42-bde7-4701-b994-825906603b06-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pmcq8\" (UID: \"8ac48e42-bde7-4701-b994-825906603b06\") " pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.530529 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.530602 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.530670 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsv7w\" (UniqueName: \"kubernetes.io/projected/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-kube-api-access-rsv7w\") pod \"route-controller-manager-6576b87f9c-lqcpn\" (UID: \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.530737 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n99rp\" (UniqueName: \"kubernetes.io/projected/2d6b6f17-bb56-49ba-8487-6e07346780a1-kube-api-access-n99rp\") pod \"collect-profiles-29486280-gf96b\" (UID: \"2d6b6f17-bb56-49ba-8487-6e07346780a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.530906 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c575b767-e334-406f-849d-e562d70985fd-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.530976 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85a9044b-9089-4a6a-87e6-06372c531aa9-config\") pod \"machine-api-operator-5694c8668f-svb79\" (UID: \"85a9044b-9089-4a6a-87e6-06372c531aa9\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.531047 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k892r\" (UniqueName: \"kubernetes.io/projected/cc6b05de-2295-4c6a-8f11-367da8bdcf00-kube-api-access-k892r\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.531121 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6js2\" (UniqueName: \"kubernetes.io/projected/3066d31d-92a4-45a7-b368-ba66d5689456-kube-api-access-p6js2\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.531213 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.531487 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.531567 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f38f7554-61cc-493f-8705-8da5f91d3926-service-ca-bundle\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.531635 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6995952d-6d8a-494d-842c-1d5cf9ee1207-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-gc9bh\" (UID: \"6995952d-6d8a-494d-842c-1d5cf9ee1207\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.531699 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dbaf4876-b99e-4096-9f36-5c888312ddab-bound-sa-token\") pod \"ingress-operator-5b745b69d9-xpzqz\" (UID: \"dbaf4876-b99e-4096-9f36-5c888312ddab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.531773 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.531876 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6995952d-6d8a-494d-842c-1d5cf9ee1207-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-gc9bh\" (UID: \"6995952d-6d8a-494d-842c-1d5cf9ee1207\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.531949 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wclcs\" (UniqueName: \"kubernetes.io/projected/13e16abe-9325-4638-8b20-7195b7af8e68-kube-api-access-wclcs\") pod \"control-plane-machine-set-operator-78cbb6b69f-psxgx\" (UID: \"13e16abe-9325-4638-8b20-7195b7af8e68\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psxgx" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.532027 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a5c75370-d1c6-43bd-a8e8-8836ea5bdb22-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ddqcf\" (UID: \"a5c75370-d1c6-43bd-a8e8-8836ea5bdb22\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ddqcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.532092 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f3aab1c-726d-4027-b629-e04916bc4f8b-serving-cert\") pod \"controller-manager-879f6c89f-v2bx4\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.532167 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f38f7554-61cc-493f-8705-8da5f91d3926-config\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.532233 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba896a24-e6f2-4480-807b-b3c5b6232cea-config\") pod \"console-operator-58897d9998-7gqzl\" (UID: \"ba896a24-e6f2-4480-807b-b3c5b6232cea\") " pod="openshift-console-operator/console-operator-58897d9998-7gqzl" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.532304 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcvf9\" (UniqueName: \"kubernetes.io/projected/1f3aab1c-726d-4027-b629-e04916bc4f8b-kube-api-access-vcvf9\") pod \"controller-manager-879f6c89f-v2bx4\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.532368 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltngh\" (UniqueName: \"kubernetes.io/projected/f38f7554-61cc-493f-8705-8da5f91d3926-kube-api-access-ltngh\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.532443 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.532511 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/216b36e4-0e40-4073-9432-d1977dc6e03a-auth-proxy-config\") pod \"machine-approver-56656f9798-zbzw5\" (UID: \"216b36e4-0e40-4073-9432-d1977dc6e03a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.532577 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t44w\" (UniqueName: \"kubernetes.io/projected/f9750de6-fc79-440e-8ad4-07acbe4edb49-kube-api-access-8t44w\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.532653 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.532722 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-config\") pod \"route-controller-manager-6576b87f9c-lqcpn\" (UID: \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.532796 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/cc6b05de-2295-4c6a-8f11-367da8bdcf00-etcd-ca\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.532914 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5th22\" (UniqueName: \"kubernetes.io/projected/ba896a24-e6f2-4480-807b-b3c5b6232cea-kube-api-access-5th22\") pod \"console-operator-58897d9998-7gqzl\" (UID: \"ba896a24-e6f2-4480-807b-b3c5b6232cea\") " pod="openshift-console-operator/console-operator-58897d9998-7gqzl" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.532981 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9782\" (UniqueName: \"kubernetes.io/projected/dbaf4876-b99e-4096-9f36-5c888312ddab-kube-api-access-h9782\") pod \"ingress-operator-5b745b69d9-xpzqz\" (UID: \"dbaf4876-b99e-4096-9f36-5c888312ddab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" Jan 23 14:06:38 crc 
kubenswrapper[4775]: I0123 14:06:38.533058 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533133 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9750de6-fc79-440e-8ad4-07acbe4edb49-trusted-ca-bundle\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533197 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f9750de6-fc79-440e-8ad4-07acbe4edb49-encryption-config\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533259 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c575b767-e334-406f-849d-e562d70985fd-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533328 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smh4x\" (UniqueName: \"kubernetes.io/projected/c575b767-e334-406f-849d-e562d70985fd-kube-api-access-smh4x\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533393 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/549e54fa-53eb-4a9d-9578-5cfbd02bb28d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qnhrq\" (UID: \"549e54fa-53eb-4a9d-9578-5cfbd02bb28d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533397 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f9750de6-fc79-440e-8ad4-07acbe4edb49-image-import-ca\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533450 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533462 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/cc6b05de-2295-4c6a-8f11-367da8bdcf00-etcd-client\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533520 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f9750de6-fc79-440e-8ad4-07acbe4edb49-node-pullsecrets\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533541 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4bbc\" (UniqueName: \"kubernetes.io/projected/549e54fa-53eb-4a9d-9578-5cfbd02bb28d-kube-api-access-b4bbc\") pod \"openshift-apiserver-operator-796bbdcf4f-qnhrq\" (UID: \"549e54fa-53eb-4a9d-9578-5cfbd02bb28d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533568 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/216b36e4-0e40-4073-9432-d1977dc6e03a-machine-approver-tls\") pod \"machine-approver-56656f9798-zbzw5\" (UID: \"216b36e4-0e40-4073-9432-d1977dc6e03a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533585 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f9750de6-fc79-440e-8ad4-07acbe4edb49-etcd-client\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533602 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-client-ca\") pod \"controller-manager-879f6c89f-v2bx4\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533623 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/216b36e4-0e40-4073-9432-d1977dc6e03a-config\") pod \"machine-approver-56656f9798-zbzw5\" (UID: \"216b36e4-0e40-4073-9432-d1977dc6e03a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533641 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9750de6-fc79-440e-8ad4-07acbe4edb49-serving-cert\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533642 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f9750de6-fc79-440e-8ad4-07acbe4edb49-etcd-serving-ca\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533665 
4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/85a9044b-9089-4a6a-87e6-06372c531aa9-images\") pod \"machine-api-operator-5694c8668f-svb79\" (UID: \"85a9044b-9089-4a6a-87e6-06372c531aa9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533704 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dbaf4876-b99e-4096-9f36-5c888312ddab-metrics-tls\") pod \"ingress-operator-5b745b69d9-xpzqz\" (UID: \"dbaf4876-b99e-4096-9f36-5c888312ddab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.530981 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ba896a24-e6f2-4480-807b-b3c5b6232cea-trusted-ca\") pod \"console-operator-58897d9998-7gqzl\" (UID: \"ba896a24-e6f2-4480-807b-b3c5b6232cea\") " pod="openshift-console-operator/console-operator-58897d9998-7gqzl" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533738 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/13e16abe-9325-4638-8b20-7195b7af8e68-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-psxgx\" (UID: \"13e16abe-9325-4638-8b20-7195b7af8e68\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psxgx" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533985 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c575b767-e334-406f-849d-e562d70985fd-audit-policies\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.529695 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.534441 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.534458 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.534551 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.534640 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c575b767-e334-406f-849d-e562d70985fd-encryption-config\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 
14:06:38.534691 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.534794 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/549e54fa-53eb-4a9d-9578-5cfbd02bb28d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qnhrq\" (UID: \"549e54fa-53eb-4a9d-9578-5cfbd02bb28d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.535211 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.535847 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/216b36e4-0e40-4073-9432-d1977dc6e03a-auth-proxy-config\") pod \"machine-approver-56656f9798-zbzw5\" (UID: \"216b36e4-0e40-4073-9432-d1977dc6e03a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.535857 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-config\") pod \"controller-manager-879f6c89f-v2bx4\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.535957 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-audit-policies\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.536261 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.536567 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-client-ca\") pod \"controller-manager-879f6c89f-v2bx4\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.536680 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6995952d-6d8a-494d-842c-1d5cf9ee1207-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-gc9bh\" (UID: \"6995952d-6d8a-494d-842c-1d5cf9ee1207\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh" Jan 23 14:06:38 crc kubenswrapper[4775]: 
I0123 14:06:38.536814 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.536888 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f38f7554-61cc-493f-8705-8da5f91d3926-service-ca-bundle\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.536960 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk85x\" (UniqueName: \"kubernetes.io/projected/216b36e4-0e40-4073-9432-d1977dc6e03a-kube-api-access-kk85x\") pod \"machine-approver-56656f9798-zbzw5\" (UID: \"216b36e4-0e40-4073-9432-d1977dc6e03a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.536991 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c575b767-e334-406f-849d-e562d70985fd-audit-dir\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.537010 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/cc6b05de-2295-4c6a-8f11-367da8bdcf00-etcd-service-ca\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.537031 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f38f7554-61cc-493f-8705-8da5f91d3926-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.537053 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/85a9044b-9089-4a6a-87e6-06372c531aa9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-svb79\" (UID: \"85a9044b-9089-4a6a-87e6-06372c531aa9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.537078 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ac48e42-bde7-4701-b994-825906603b06-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pmcq8\" (UID: \"8ac48e42-bde7-4701-b994-825906603b06\") " pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.537100 4775 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3066d31d-92a4-45a7-b368-ba66d5689456-audit-dir\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.537118 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9750de6-fc79-440e-8ad4-07acbe4edb49-config\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.537141 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-serving-cert\") pod \"route-controller-manager-6576b87f9c-lqcpn\" (UID: \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.537615 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-config\") pod \"route-controller-manager-6576b87f9c-lqcpn\" (UID: \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.537649 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4q8mj"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.537665 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fgb82"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.537686 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-client-ca\") pod \"route-controller-manager-6576b87f9c-lqcpn\" (UID: \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.538010 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f9750de6-fc79-440e-8ad4-07acbe4edb49-node-pullsecrets\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.538084 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c575b767-e334-406f-849d-e562d70985fd-audit-dir\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.538540 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.538586 4775 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.538731 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f9750de6-fc79-440e-8ad4-07acbe4edb49-image-import-ca\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.539067 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9750de6-fc79-440e-8ad4-07acbe4edb49-config\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.539105 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3066d31d-92a4-45a7-b368-ba66d5689456-audit-dir\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.539333 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c575b767-e334-406f-849d-e562d70985fd-etcd-client\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.539345 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.539371 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c575b767-e334-406f-849d-e562d70985fd-serving-cert\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.539383 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.539409 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdjg2\" (UniqueName: \"kubernetes.io/projected/8ba1b8ce-8332-45c9-bfb0-9a1842dea009-kube-api-access-tdjg2\") pod \"downloads-7954f5f757-mvqcg\" (UID: \"8ba1b8ce-8332-45c9-bfb0-9a1842dea009\") " pod="openshift-console/downloads-7954f5f757-mvqcg" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.539435 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.539463 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f9750de6-fc79-440e-8ad4-07acbe4edb49-audit-dir\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.539528 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f9750de6-fc79-440e-8ad4-07acbe4edb49-audit-dir\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.539771 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/216b36e4-0e40-4073-9432-d1977dc6e03a-machine-approver-tls\") pod \"machine-approver-56656f9798-zbzw5\" (UID: \"216b36e4-0e40-4073-9432-d1977dc6e03a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.540109 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f9750de6-fc79-440e-8ad4-07acbe4edb49-etcd-client\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.540428 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c575b767-e334-406f-849d-e562d70985fd-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.540509 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.540578 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9750de6-fc79-440e-8ad4-07acbe4edb49-trusted-ca-bundle\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.541004 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/216b36e4-0e40-4073-9432-d1977dc6e03a-config\") pod \"machine-approver-56656f9798-zbzw5\" (UID: \"216b36e4-0e40-4073-9432-d1977dc6e03a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.541398 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9750de6-fc79-440e-8ad4-07acbe4edb49-serving-cert\") pod \"apiserver-76f77b778f-mc4h4\" (UID: 
\"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.541460 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.541488 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c575b767-e334-406f-849d-e562d70985fd-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.541553 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba896a24-e6f2-4480-807b-b3c5b6232cea-config\") pod \"console-operator-58897d9998-7gqzl\" (UID: \"ba896a24-e6f2-4480-807b-b3c5b6232cea\") " pod="openshift-console-operator/console-operator-58897d9998-7gqzl" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.542300 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-m5nll"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.542453 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.542456 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.542461 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.543029 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f9750de6-fc79-440e-8ad4-07acbe4edb49-encryption-config\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.543305 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-m5nll" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.543488 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-serving-cert\") pod \"route-controller-manager-6576b87f9c-lqcpn\" (UID: \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.543521 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-bvqqf"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.543953 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6995952d-6d8a-494d-842c-1d5cf9ee1207-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-gc9bh\" (UID: \"6995952d-6d8a-494d-842c-1d5cf9ee1207\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.544077 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-bvqqf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.544323 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-c9x8w"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.545357 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bjb9d"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.545658 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a5c75370-d1c6-43bd-a8e8-8836ea5bdb22-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ddqcf\" (UID: \"a5c75370-d1c6-43bd-a8e8-8836ea5bdb22\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ddqcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.546326 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.546408 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.547425 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.548442 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.549431 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-m5nll"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.550513 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.550934 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.551482 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f3aab1c-726d-4027-b629-e04916bc4f8b-serving-cert\") pod \"controller-manager-879f6c89f-v2bx4\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.551544 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.552549 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.553529 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-br76j"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.554538 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-btttg"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.555588 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-bvqqf"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.559362 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba896a24-e6f2-4480-807b-b3c5b6232cea-serving-cert\") pod \"console-operator-58897d9998-7gqzl\" (UID: \"ba896a24-e6f2-4480-807b-b3c5b6232cea\") " pod="openshift-console-operator/console-operator-58897d9998-7gqzl" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.567146 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.586554 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.607178 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.626874 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.639949 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d6b6f17-bb56-49ba-8487-6e07346780a1-secret-volume\") pod \"collect-profiles-29486280-gf96b\" (UID: \"2d6b6f17-bb56-49ba-8487-6e07346780a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.639992 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbaf4876-b99e-4096-9f36-5c888312ddab-trusted-ca\") pod \"ingress-operator-5b745b69d9-xpzqz\" (UID: \"dbaf4876-b99e-4096-9f36-5c888312ddab\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.640018 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwv8t\" (UniqueName: \"kubernetes.io/projected/8ac48e42-bde7-4701-b994-825906603b06-kube-api-access-bwv8t\") pod \"marketplace-operator-79b997595-pmcq8\" (UID: \"8ac48e42-bde7-4701-b994-825906603b06\") " pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.640037 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc6b05de-2295-4c6a-8f11-367da8bdcf00-config\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.640062 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc6b05de-2295-4c6a-8f11-367da8bdcf00-serving-cert\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.640081 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d6b6f17-bb56-49ba-8487-6e07346780a1-config-volume\") pod \"collect-profiles-29486280-gf96b\" (UID: \"2d6b6f17-bb56-49ba-8487-6e07346780a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.640612 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ac48e42-bde7-4701-b994-825906603b06-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pmcq8\" (UID: \"8ac48e42-bde7-4701-b994-825906603b06\") " pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.640847 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n99rp\" (UniqueName: \"kubernetes.io/projected/2d6b6f17-bb56-49ba-8487-6e07346780a1-kube-api-access-n99rp\") pod \"collect-profiles-29486280-gf96b\" (UID: \"2d6b6f17-bb56-49ba-8487-6e07346780a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.640931 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k892r\" (UniqueName: \"kubernetes.io/projected/cc6b05de-2295-4c6a-8f11-367da8bdcf00-kube-api-access-k892r\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.641000 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dbaf4876-b99e-4096-9f36-5c888312ddab-bound-sa-token\") pod \"ingress-operator-5b745b69d9-xpzqz\" (UID: \"dbaf4876-b99e-4096-9f36-5c888312ddab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.641072 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wclcs\" (UniqueName: \"kubernetes.io/projected/13e16abe-9325-4638-8b20-7195b7af8e68-kube-api-access-wclcs\") pod \"control-plane-machine-set-operator-78cbb6b69f-psxgx\" (UID: \"13e16abe-9325-4638-8b20-7195b7af8e68\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psxgx" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.641217 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/cc6b05de-2295-4c6a-8f11-367da8bdcf00-etcd-ca\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.641269 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9782\" (UniqueName: \"kubernetes.io/projected/dbaf4876-b99e-4096-9f36-5c888312ddab-kube-api-access-h9782\") pod \"ingress-operator-5b745b69d9-xpzqz\" (UID: \"dbaf4876-b99e-4096-9f36-5c888312ddab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.641332 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cc6b05de-2295-4c6a-8f11-367da8bdcf00-etcd-client\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.641398 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dbaf4876-b99e-4096-9f36-5c888312ddab-metrics-tls\") pod \"ingress-operator-5b745b69d9-xpzqz\" (UID: \"dbaf4876-b99e-4096-9f36-5c888312ddab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.641439 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/13e16abe-9325-4638-8b20-7195b7af8e68-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-psxgx\" (UID: \"13e16abe-9325-4638-8b20-7195b7af8e68\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psxgx" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.641490 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/cc6b05de-2295-4c6a-8f11-367da8bdcf00-etcd-service-ca\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.641548 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ac48e42-bde7-4701-b994-825906603b06-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pmcq8\" (UID: \"8ac48e42-bde7-4701-b994-825906603b06\") " pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.643570 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbaf4876-b99e-4096-9f36-5c888312ddab-trusted-ca\") pod 
\"ingress-operator-5b745b69d9-xpzqz\" (UID: \"dbaf4876-b99e-4096-9f36-5c888312ddab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.646320 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.646878 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/13e16abe-9325-4638-8b20-7195b7af8e68-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-psxgx\" (UID: \"13e16abe-9325-4638-8b20-7195b7af8e68\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psxgx" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.647575 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc6b05de-2295-4c6a-8f11-367da8bdcf00-serving-cert\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.649525 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dbaf4876-b99e-4096-9f36-5c888312ddab-metrics-tls\") pod \"ingress-operator-5b745b69d9-xpzqz\" (UID: \"dbaf4876-b99e-4096-9f36-5c888312ddab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.656917 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cc6b05de-2295-4c6a-8f11-367da8bdcf00-etcd-client\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.667284 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.675112 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc6b05de-2295-4c6a-8f11-367da8bdcf00-config\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.687086 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.695206 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/cc6b05de-2295-4c6a-8f11-367da8bdcf00-etcd-ca\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.707509 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.727696 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.735137 4775 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/cc6b05de-2295-4c6a-8f11-367da8bdcf00-etcd-service-ca\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.746656 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.767276 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.775784 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d6b6f17-bb56-49ba-8487-6e07346780a1-config-volume\") pod \"collect-profiles-29486280-gf96b\" (UID: \"2d6b6f17-bb56-49ba-8487-6e07346780a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.787946 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.798599 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d6b6f17-bb56-49ba-8487-6e07346780a1-secret-volume\") pod \"collect-profiles-29486280-gf96b\" (UID: \"2d6b6f17-bb56-49ba-8487-6e07346780a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.808151 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.827168 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.847596 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.866712 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.887156 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.897355 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ac48e42-bde7-4701-b994-825906603b06-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pmcq8\" (UID: \"8ac48e42-bde7-4701-b994-825906603b06\") " pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.914999 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.925679 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8ac48e42-bde7-4701-b994-825906603b06-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pmcq8\" (UID: \"8ac48e42-bde7-4701-b994-825906603b06\") " pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.926598 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.946883 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.967546 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.986443 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.006954 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.026352 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.047129 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.067162 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.088189 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.106291 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.126708 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.146323 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.167173 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.187322 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.206497 4775 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.227746 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.247146 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.267169 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 23 14:06:39 crc 
Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.287457 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.327379 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.346873 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.366199 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.386484 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.426868 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.444889 4775 request.go:700] Waited for 1.003387183s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-dockercfg-gkqpw&limit=500&resourceVersion=0
Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.446742 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.467864 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.487117 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.507127 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.526386 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 23 14:06:39 crc kubenswrapper[4775]: E0123 14:06:39.530060 4775 secret.go:188] Couldn't get secret openshift-authentication-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 23 14:06:39 crc kubenswrapper[4775]: E0123 14:06:39.530156 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f38f7554-61cc-493f-8705-8da5f91d3926-serving-cert podName:f38f7554-61cc-493f-8705-8da5f91d3926 nodeName:}" failed. No retries permitted until 2026-01-23 14:06:40.030127591 +0000 UTC m=+147.024956361 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f38f7554-61cc-493f-8705-8da5f91d3926-serving-cert") pod "authentication-operator-69f744f599-577dd" (UID: "f38f7554-61cc-493f-8705-8da5f91d3926") : failed to sync secret cache: timed out waiting for the condition
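The request.go:700 line is client-go's own token-bucket rate limiter (the QPS/Burst fields on rest.Config), which is distinct from API-server priority and fairness, exactly as the message says. With this many informers starting at once, startup traffic exceeds the bucket and each queued request logs a wait. A sketch of where that knob lives; the kubeconfig path and the numbers are illustrative, not read from this kubelet's configuration:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // sustained client requests/second before queuing starts
	cfg.Burst = 100 // extra headroom for startup bursts like the one in this log
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Printf("client ready (QPS=%v Burst=%v): %T\n", cfg.QPS, cfg.Burst, cs)
}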
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f38f7554-61cc-493f-8705-8da5f91d3926-serving-cert") pod "authentication-operator-69f744f599-577dd" (UID: "f38f7554-61cc-493f-8705-8da5f91d3926") : failed to sync secret cache: timed out waiting for the condition Jan 23 14:06:39 crc kubenswrapper[4775]: E0123 14:06:39.533500 4775 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:06:39 crc kubenswrapper[4775]: E0123 14:06:39.533616 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/85a9044b-9089-4a6a-87e6-06372c531aa9-config podName:85a9044b-9089-4a6a-87e6-06372c531aa9 nodeName:}" failed. No retries permitted until 2026-01-23 14:06:40.033586934 +0000 UTC m=+147.028415704 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/85a9044b-9089-4a6a-87e6-06372c531aa9-config") pod "machine-api-operator-5694c8668f-svb79" (UID: "85a9044b-9089-4a6a-87e6-06372c531aa9") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:06:39 crc kubenswrapper[4775]: E0123 14:06:39.534791 4775 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:06:39 crc kubenswrapper[4775]: E0123 14:06:39.534903 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/85a9044b-9089-4a6a-87e6-06372c531aa9-images podName:85a9044b-9089-4a6a-87e6-06372c531aa9 nodeName:}" failed. No retries permitted until 2026-01-23 14:06:40.034877973 +0000 UTC m=+147.029706803 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/85a9044b-9089-4a6a-87e6-06372c531aa9-images") pod "machine-api-operator-5694c8668f-svb79" (UID: "85a9044b-9089-4a6a-87e6-06372c531aa9") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:06:39 crc kubenswrapper[4775]: E0123 14:06:39.543350 4775 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:06:39 crc kubenswrapper[4775]: E0123 14:06:39.543367 4775 secret.go:188] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Jan 23 14:06:39 crc kubenswrapper[4775]: E0123 14:06:39.543406 4775 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:06:39 crc kubenswrapper[4775]: E0123 14:06:39.543419 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f38f7554-61cc-493f-8705-8da5f91d3926-config podName:f38f7554-61cc-493f-8705-8da5f91d3926 nodeName:}" failed. No retries permitted until 2026-01-23 14:06:40.043401937 +0000 UTC m=+147.038230697 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f38f7554-61cc-493f-8705-8da5f91d3926-config") pod "authentication-operator-69f744f599-577dd" (UID: "f38f7554-61cc-493f-8705-8da5f91d3926") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:06:39 crc kubenswrapper[4775]: E0123 14:06:39.543490 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85a9044b-9089-4a6a-87e6-06372c531aa9-machine-api-operator-tls podName:85a9044b-9089-4a6a-87e6-06372c531aa9 nodeName:}" failed. No retries permitted until 2026-01-23 14:06:40.043470889 +0000 UTC m=+147.038299659 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/85a9044b-9089-4a6a-87e6-06372c531aa9-machine-api-operator-tls") pod "machine-api-operator-5694c8668f-svb79" (UID: "85a9044b-9089-4a6a-87e6-06372c531aa9") : failed to sync secret cache: timed out waiting for the condition Jan 23 14:06:39 crc kubenswrapper[4775]: E0123 14:06:39.543512 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f38f7554-61cc-493f-8705-8da5f91d3926-trusted-ca-bundle podName:f38f7554-61cc-493f-8705-8da5f91d3926 nodeName:}" failed. No retries permitted until 2026-01-23 14:06:40.04350108 +0000 UTC m=+147.038329850 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/f38f7554-61cc-493f-8705-8da5f91d3926-trusted-ca-bundle") pod "authentication-operator-69f744f599-577dd" (UID: "f38f7554-61cc-493f-8705-8da5f91d3926") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.547586 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.565837 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.587306 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.606001 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.626937 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.647537 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.665995 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.686594 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.706478 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 
Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.747423 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.758493 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 14:06:39 crc kubenswrapper[4775]: E0123 14:06:39.758618 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:08:41.758589884 +0000 UTC m=+268.753418664 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.758793 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.758891 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.758923 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.758963 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.759780 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.762477 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.764156 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.764628 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.766600 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.787212 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.807325 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.826750 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.847118 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.866868 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.886988 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.906414 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.927691 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.938665 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.947301 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.958637 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.966877 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.967976 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.987345 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.009390 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.028903 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.048196 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.098717 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f38f7554-61cc-493f-8705-8da5f91d3926-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.098939 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/85a9044b-9089-4a6a-87e6-06372c531aa9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-svb79\" (UID: \"85a9044b-9089-4a6a-87e6-06372c531aa9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.098988 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f38f7554-61cc-493f-8705-8da5f91d3926-serving-cert\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.099040 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85a9044b-9089-4a6a-87e6-06372c531aa9-config\") pod \"machine-api-operator-5694c8668f-svb79\" (UID: \"85a9044b-9089-4a6a-87e6-06372c531aa9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.099084 4775 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f38f7554-61cc-493f-8705-8da5f91d3926-config\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.099152 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/85a9044b-9089-4a6a-87e6-06372c531aa9-images\") pod \"machine-api-operator-5694c8668f-svb79\" (UID: \"85a9044b-9089-4a6a-87e6-06372c531aa9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.100046 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.102203 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.106326 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.127573 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.147745 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.166629 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.186126 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.212192 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 23 14:06:40 crc kubenswrapper[4775]: W0123 14:06:40.215921 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-8ddf0268ebdc3fc0acc844a9e2c036935d9f6efb1c5ce9c49a7c74146aae22ed WatchSource:0}: Error finding container 8ddf0268ebdc3fc0acc844a9e2c036935d9f6efb1c5ce9c49a7c74146aae22ed: Status 404 returned error can't find the container with id 8ddf0268ebdc3fc0acc844a9e2c036935d9f6efb1c5ce9c49a7c74146aae22ed Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.247786 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zngzz\" (UniqueName: \"kubernetes.io/projected/a5c75370-d1c6-43bd-a8e8-8836ea5bdb22-kube-api-access-zngzz\") pod \"cluster-samples-operator-665b6dd947-ddqcf\" (UID: \"a5c75370-d1c6-43bd-a8e8-8836ea5bdb22\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ddqcf" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.259299 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6js2\" (UniqueName: \"kubernetes.io/projected/3066d31d-92a4-45a7-b368-ba66d5689456-kube-api-access-p6js2\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.260785 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ddqcf" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.277616 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.279793 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4bbc\" (UniqueName: \"kubernetes.io/projected/549e54fa-53eb-4a9d-9578-5cfbd02bb28d-kube-api-access-b4bbc\") pod \"openshift-apiserver-operator-796bbdcf4f-qnhrq\" (UID: \"549e54fa-53eb-4a9d-9578-5cfbd02bb28d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.302074 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t44w\" (UniqueName: \"kubernetes.io/projected/f9750de6-fc79-440e-8ad4-07acbe4edb49-kube-api-access-8t44w\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.322696 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsv7w\" (UniqueName: \"kubernetes.io/projected/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-kube-api-access-rsv7w\") pod \"route-controller-manager-6576b87f9c-lqcpn\" (UID: \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.385857 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqqd4\" (UniqueName: \"kubernetes.io/projected/6995952d-6d8a-494d-842c-1d5cf9ee1207-kube-api-access-mqqd4\") pod \"openshift-controller-manager-operator-756b6f6bc6-gc9bh\" (UID: \"6995952d-6d8a-494d-842c-1d5cf9ee1207\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.439439 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ddqcf"] Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.444367 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdjg2\" (UniqueName: \"kubernetes.io/projected/8ba1b8ce-8332-45c9-bfb0-9a1842dea009-kube-api-access-tdjg2\") pod \"downloads-7954f5f757-mvqcg\" (UID: \"8ba1b8ce-8332-45c9-bfb0-9a1842dea009\") " pod="openshift-console/downloads-7954f5f757-mvqcg" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.464976 4775 request.go:700] Waited for 1.924366567s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.477935 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4q8mj"] Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.480244 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcvf9\" (UniqueName: 
\"kubernetes.io/projected/1f3aab1c-726d-4027-b629-e04916bc4f8b-kube-api-access-vcvf9\") pod \"controller-manager-879f6c89f-v2bx4\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:40 crc kubenswrapper[4775]: W0123 14:06:40.484164 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3066d31d_92a4_45a7_b368_ba66d5689456.slice/crio-74f4cd2270219100871d3310c76c771eee7c27cb5f3b7f3244692cc8ce1e0535 WatchSource:0}: Error finding container 74f4cd2270219100871d3310c76c771eee7c27cb5f3b7f3244692cc8ce1e0535: Status 404 returned error can't find the container with id 74f4cd2270219100871d3310c76c771eee7c27cb5f3b7f3244692cc8ce1e0535 Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.486950 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.500891 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.506310 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.520981 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"75a4a1a4529a6e632b8fa862424543e4609219da0d81806f206e32abd5cd95fb"} Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.522317 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"18810222cbc1a0dc699884a78e50885f0c7718049d60a9ccfa905497d5b065d8"} Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.523591 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" event={"ID":"3066d31d-92a4-45a7-b368-ba66d5689456","Type":"ContainerStarted","Data":"74f4cd2270219100871d3310c76c771eee7c27cb5f3b7f3244692cc8ce1e0535"} Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.524886 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"8ddf0268ebdc3fc0acc844a9e2c036935d9f6efb1c5ce9c49a7c74146aae22ed"} Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.526206 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.536128 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.547200 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.552237 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.565831 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.569958 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.586841 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.604003 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-mvqcg" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.606268 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.617704 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.653964 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dbaf4876-b99e-4096-9f36-5c888312ddab-bound-sa-token\") pod \"ingress-operator-5b745b69d9-xpzqz\" (UID: \"dbaf4876-b99e-4096-9f36-5c888312ddab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.686358 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9782\" (UniqueName: \"kubernetes.io/projected/dbaf4876-b99e-4096-9f36-5c888312ddab-kube-api-access-h9782\") pod \"ingress-operator-5b745b69d9-xpzqz\" (UID: \"dbaf4876-b99e-4096-9f36-5c888312ddab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.706366 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.715185 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwv8t\" (UniqueName: \"kubernetes.io/projected/8ac48e42-bde7-4701-b994-825906603b06-kube-api-access-bwv8t\") pod \"marketplace-operator-79b997595-pmcq8\" (UID: \"8ac48e42-bde7-4701-b994-825906603b06\") " pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.733893 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.734741 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n99rp\" (UniqueName: \"kubernetes.io/projected/2d6b6f17-bb56-49ba-8487-6e07346780a1-kube-api-access-n99rp\") pod \"collect-profiles-29486280-gf96b\" (UID: \"2d6b6f17-bb56-49ba-8487-6e07346780a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.745720 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k892r\" (UniqueName: \"kubernetes.io/projected/cc6b05de-2295-4c6a-8f11-367da8bdcf00-kube-api-access-k892r\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.746597 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.774302 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.780764 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f38f7554-61cc-493f-8705-8da5f91d3926-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.786990 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.790087 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f38f7554-61cc-493f-8705-8da5f91d3926-config\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.806306 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.847067 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.866983 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.887034 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.926778 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.937327 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltngh\" (UniqueName: 
\"kubernetes.io/projected/f38f7554-61cc-493f-8705-8da5f91d3926-kube-api-access-ltngh\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.947032 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.956292 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk85x\" (UniqueName: \"kubernetes.io/projected/216b36e4-0e40-4073-9432-d1977dc6e03a-kube-api-access-kk85x\") pod \"machine-approver-56656f9798-zbzw5\" (UID: \"216b36e4-0e40-4073-9432-d1977dc6e03a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.974702 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.983875 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f38f7554-61cc-493f-8705-8da5f91d3926-serving-cert\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.986618 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.991551 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdgbr\" (UniqueName: \"kubernetes.io/projected/85a9044b-9089-4a6a-87e6-06372c531aa9-kube-api-access-rdgbr\") pod \"machine-api-operator-5694c8668f-svb79\" (UID: \"85a9044b-9089-4a6a-87e6-06372c531aa9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.999653 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wclcs\" (UniqueName: \"kubernetes.io/projected/13e16abe-9325-4638-8b20-7195b7af8e68-kube-api-access-wclcs\") pod \"control-plane-machine-set-operator-78cbb6b69f-psxgx\" (UID: \"13e16abe-9325-4638-8b20-7195b7af8e68\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psxgx" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.008699 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.010446 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.026839 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.030202 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/85a9044b-9089-4a6a-87e6-06372c531aa9-images\") pod \"machine-api-operator-5694c8668f-svb79\" (UID: \"85a9044b-9089-4a6a-87e6-06372c531aa9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" Jan 23 14:06:41 
crc kubenswrapper[4775]: I0123 14:06:41.047126 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.054918 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/85a9044b-9089-4a6a-87e6-06372c531aa9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-svb79\" (UID: \"85a9044b-9089-4a6a-87e6-06372c531aa9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.167910 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5th22\" (UniqueName: \"kubernetes.io/projected/ba896a24-e6f2-4480-807b-b3c5b6232cea-kube-api-access-5th22\") pod \"console-operator-58897d9998-7gqzl\" (UID: \"ba896a24-e6f2-4480-807b-b3c5b6232cea\") " pod="openshift-console-operator/console-operator-58897d9998-7gqzl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.168229 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85a9044b-9089-4a6a-87e6-06372c531aa9-config\") pod \"machine-api-operator-5694c8668f-svb79\" (UID: \"85a9044b-9089-4a6a-87e6-06372c531aa9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.168562 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smh4x\" (UniqueName: \"kubernetes.io/projected/c575b767-e334-406f-849d-e562d70985fd-kube-api-access-smh4x\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.168608 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.169219 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psxgx" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.170891 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.170980 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.171019 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-7gqzl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.170894 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.171469 4775 util.go:30] "No sandbox for pod can be found. 
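The single "SyncLoop (probe)" line in this run fires only on a transition: the readiness worker for ovnkube-node saw its check flip to ready and poked the sync loop, which then lets the pod count toward service endpoints; steady-state probe results never reach the log. A toy version of that edge-triggered update (kubelet's real prober lives in pkg/kubelet/prober; these types are simplified):

package main

import "fmt"

type probeWorker struct {
	pod  string
	last string
}

func (w *probeWorker) observe(result string, notify func(string)) {
	if result != w.last { // only transitions reach the sync loop
		w.last = result
		notify(fmt.Sprintf("SyncLoop (probe) probe=readiness status=%q pod=%q", result, w.pod))
	}
}

func main() {
	w := &probeWorker{pod: "openshift-ovn-kubernetes/ovnkube-node-qrvs8", last: "not ready"}
	logf := func(s string) { fmt.Println(s) }
	w.observe("not ready", logf) // no output: unchanged
	w.observe("ready", logf)     // transition -> logged, like the line above
}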
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.171906 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-bound-sa-token\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.171959 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkptx\" (UniqueName: \"kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-kube-api-access-hkptx\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.172018 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/85b405af-7314-4e53-93a5-252b69153561-registry-certificates\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.172472 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: E0123 14:06:41.173073 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:41.673059185 +0000 UTC m=+148.667887935 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.173548 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/85b405af-7314-4e53-93a5-252b69153561-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.173660 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/85b405af-7314-4e53-93a5-252b69153561-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.173699 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-registry-tls\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.173886 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/85b405af-7314-4e53-93a5-252b69153561-trusted-ca\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.195873 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.276473 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:41 crc kubenswrapper[4775]: E0123 14:06:41.276721 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:41.77668164 +0000 UTC m=+148.771510400 (durationBeforeRetry 500ms). 
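The interleaved VerifyControllerAttachedVolume / MountVolume / UnmountVolume lines are all one loop: kubelet's volume manager diffs a desired state of world (volumes the scheduled pods need) against an actual state of world (volumes currently mounted) and starts an operation for each difference, which is why the same PVC can be simultaneously wanted by the new image-registry pod and pending unmount for the old pod UID 8f668bae. A compressed sketch of that reconcile step with toy types, not kubelet's real ones:

package main

import "fmt"

func reconcile(desired, actual map[string]bool) {
	for v := range desired {
		if !actual[v] {
			fmt.Printf("operationExecutor.MountVolume started for volume %q\n", v)
		}
	}
	for v := range actual {
		if !desired[v] {
			// e.g. pvc-657094db-... above: its old pod is gone, so an unmount is
			// desired, but it keeps failing until the CSI driver registers.
			fmt.Printf("operationExecutor.UnmountVolume started for volume %q\n", v)
		}
	}
}

func main() {
	desired := map[string]bool{"registry-tls": true, "trusted-ca": true}
	actual := map[string]bool{"trusted-ca": true, "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8": true}
	reconcile(desired, actual)
}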
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.276920 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b2e6a5f5-108e-4832-8036-58e1228a7f4f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2lgz4\" (UID: \"b2e6a5f5-108e-4832-8036-58e1228a7f4f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2lgz4" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.276953 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c7fae259-48f4-4d23-8685-6440a5246423-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4dpv6\" (UID: \"c7fae259-48f4-4d23-8685-6440a5246423\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.276987 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/381c20f8-ed2d-4aa8-b99b-5d85a6eb5526-service-ca-bundle\") pod \"router-default-5444994796-nj2dd\" (UID: \"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526\") " pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.277003 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/aaac7553-88f9-49bd-811f-e993ad0cd40d-plugins-dir\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.277021 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-config\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.277037 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7fae259-48f4-4d23-8685-6440a5246423-serving-cert\") pod \"openshift-config-operator-7777fb866f-4dpv6\" (UID: \"c7fae259-48f4-4d23-8685-6440a5246423\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.277094 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f5d381d-3a9d-4ba4-85fb-e9008e359729-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mm7b2\" (UID: \"0f5d381d-3a9d-4ba4-85fb-e9008e359729\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 
14:06:41.277173 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-oauth-serving-cert\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.277215 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/85b405af-7314-4e53-93a5-252b69153561-registry-certificates\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.277241 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f5d381d-3a9d-4ba4-85fb-e9008e359729-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mm7b2\" (UID: \"0f5d381d-3a9d-4ba4-85fb-e9008e359729\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.278885 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1680ee1-e1af-4c87-b9d9-d29e2b0a5043-auth-proxy-config\") pod \"machine-config-operator-74547568cd-prjn9\" (UID: \"e1680ee1-e1af-4c87-b9d9-d29e2b0a5043\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.279021 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.279080 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ps7f\" (UniqueName: \"kubernetes.io/projected/b2e6a5f5-108e-4832-8036-58e1228a7f4f-kube-api-access-9ps7f\") pod \"multus-admission-controller-857f4d67dd-2lgz4\" (UID: \"b2e6a5f5-108e-4832-8036-58e1228a7f4f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2lgz4" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.280982 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/85b405af-7314-4e53-93a5-252b69153561-registry-certificates\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.282167 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf9wq\" (UniqueName: \"kubernetes.io/projected/0f5d381d-3a9d-4ba4-85fb-e9008e359729-kube-api-access-mf9wq\") pod \"cluster-image-registry-operator-dc59b4c8b-mm7b2\" (UID: \"0f5d381d-3a9d-4ba4-85fb-e9008e359729\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2" Jan 23 14:06:41 crc 
kubenswrapper[4775]: I0123 14:06:41.282266 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-oauth-config\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.282431 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pmf9\" (UniqueName: \"kubernetes.io/projected/c7fae259-48f4-4d23-8685-6440a5246423-kube-api-access-5pmf9\") pod \"openshift-config-operator-7777fb866f-4dpv6\" (UID: \"c7fae259-48f4-4d23-8685-6440a5246423\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.282517 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/381c20f8-ed2d-4aa8-b99b-5d85a6eb5526-default-certificate\") pod \"router-default-5444994796-nj2dd\" (UID: \"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526\") " pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.282920 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bplmh\" (UniqueName: \"kubernetes.io/projected/e1680ee1-e1af-4c87-b9d9-d29e2b0a5043-kube-api-access-bplmh\") pod \"machine-config-operator-74547568cd-prjn9\" (UID: \"e1680ee1-e1af-4c87-b9d9-d29e2b0a5043\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.282998 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e1680ee1-e1af-4c87-b9d9-d29e2b0a5043-images\") pod \"machine-config-operator-74547568cd-prjn9\" (UID: \"e1680ee1-e1af-4c87-b9d9-d29e2b0a5043\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.283331 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-registry-tls\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.283400 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/381c20f8-ed2d-4aa8-b99b-5d85a6eb5526-metrics-certs\") pod \"router-default-5444994796-nj2dd\" (UID: \"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526\") " pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.283460 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/85b405af-7314-4e53-93a5-252b69153561-trusted-ca\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.283842 4775 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-bound-sa-token\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.283905 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4j4h\" (UniqueName: \"kubernetes.io/projected/381c20f8-ed2d-4aa8-b99b-5d85a6eb5526-kube-api-access-q4j4h\") pod \"router-default-5444994796-nj2dd\" (UID: \"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526\") " pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:41 crc kubenswrapper[4775]: E0123 14:06:41.284340 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:41.784312798 +0000 UTC m=+148.779141548 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.288799 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkptx\" (UniqueName: \"kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-kube-api-access-hkptx\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.289666 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/85b405af-7314-4e53-93a5-252b69153561-trusted-ca\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.291883 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f5d381d-3a9d-4ba4-85fb-e9008e359729-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mm7b2\" (UID: \"0f5d381d-3a9d-4ba4-85fb-e9008e359729\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.292786 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/aaac7553-88f9-49bd-811f-e993ad0cd40d-csi-data-dir\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.293048 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1680ee1-e1af-4c87-b9d9-d29e2b0a5043-proxy-tls\") pod 
\"machine-config-operator-74547568cd-prjn9\" (UID: \"e1680ee1-e1af-4c87-b9d9-d29e2b0a5043\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.293383 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-trusted-ca-bundle\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.295428 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stlnp\" (UniqueName: \"kubernetes.io/projected/aaac7553-88f9-49bd-811f-e993ad0cd40d-kube-api-access-stlnp\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.295511 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-serving-cert\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.296251 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/85b405af-7314-4e53-93a5-252b69153561-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.296322 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/aaac7553-88f9-49bd-811f-e993ad0cd40d-socket-dir\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.296441 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-service-ca\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.296489 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/85b405af-7314-4e53-93a5-252b69153561-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.296907 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/85b405af-7314-4e53-93a5-252b69153561-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.297892 4775 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-registry-tls\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.298063 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/381c20f8-ed2d-4aa8-b99b-5d85a6eb5526-stats-auth\") pod \"router-default-5444994796-nj2dd\" (UID: \"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526\") " pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.298305 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/aaac7553-88f9-49bd-811f-e993ad0cd40d-mountpoint-dir\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.298487 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/aaac7553-88f9-49bd-811f-e993ad0cd40d-registration-dir\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.298757 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgvmt\" (UniqueName: \"kubernetes.io/projected/a6821f92-2d15-4dc0-92ed-7a30cef98db9-kube-api-access-tgvmt\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.307202 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/85b405af-7314-4e53-93a5-252b69153561-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.332177 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-bound-sa-token\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.368731 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkptx\" (UniqueName: \"kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-kube-api-access-hkptx\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: W0123 14:06:41.372662 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod216b36e4_0e40_4073_9432_d1977dc6e03a.slice/crio-48f09891a71c60da2dd93d7b65738fd065066ef5c721963f0ff962507f68292a WatchSource:0}: 
Error finding container 48f09891a71c60da2dd93d7b65738fd065066ef5c721963f0ff962507f68292a: Status 404 returned error can't find the container with id 48f09891a71c60da2dd93d7b65738fd065066ef5c721963f0ff962507f68292a Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408060 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408157 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-service-ca\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408193 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/09c7da5e-ce0a-4a3c-9419-420f63f93f0e-node-bootstrap-token\") pod \"machine-config-server-kmqrn\" (UID: \"09c7da5e-ce0a-4a3c-9419-420f63f93f0e\") " pod="openshift-machine-config-operator/machine-config-server-kmqrn" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408229 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d7707d7a-bfb7-4600-98f4-be607d9e77f4-webhook-cert\") pod \"packageserver-d55dfcdfc-rfbk5\" (UID: \"d7707d7a-bfb7-4600-98f4-be607d9e77f4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408262 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26bb6\" (UniqueName: \"kubernetes.io/projected/98e5fa0e-5fb3-4a38-bcdc-328a22d4460f-kube-api-access-26bb6\") pod \"migrator-59844c95c7-br76j\" (UID: \"98e5fa0e-5fb3-4a38-bcdc-328a22d4460f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-br76j" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408324 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/381c20f8-ed2d-4aa8-b99b-5d85a6eb5526-stats-auth\") pod \"router-default-5444994796-nj2dd\" (UID: \"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526\") " pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408350 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6d1f9f7b-5676-4445-b8ec-1288e6beff20-metrics-tls\") pod \"dns-default-bvqqf\" (UID: \"6d1f9f7b-5676-4445-b8ec-1288e6beff20\") " pod="openshift-dns/dns-default-bvqqf" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408374 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/aaac7553-88f9-49bd-811f-e993ad0cd40d-mountpoint-dir\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408395 4775 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/aaac7553-88f9-49bd-811f-e993ad0cd40d-registration-dir\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408418 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a2f579b-0f13-47dd-9566-dd57100ab22a-config\") pod \"kube-controller-manager-operator-78b949d7b-6fmtx\" (UID: \"8a2f579b-0f13-47dd-9566-dd57100ab22a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408439 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqmss\" (UniqueName: \"kubernetes.io/projected/384fd47a-81d2-4219-8a66-fbeec5bae860-kube-api-access-hqmss\") pod \"olm-operator-6b444d44fb-65w5f\" (UID: \"384fd47a-81d2-4219-8a66-fbeec5bae860\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408466 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgvmt\" (UniqueName: \"kubernetes.io/projected/a6821f92-2d15-4dc0-92ed-7a30cef98db9-kube-api-access-tgvmt\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408491 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jmf4\" (UniqueName: \"kubernetes.io/projected/4304b2e3-9359-4caf-94dd-1e31716fee56-kube-api-access-8jmf4\") pod \"catalog-operator-68c6474976-2vnwm\" (UID: \"4304b2e3-9359-4caf-94dd-1e31716fee56\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408515 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r78j\" (UniqueName: \"kubernetes.io/projected/d7707d7a-bfb7-4600-98f4-be607d9e77f4-kube-api-access-9r78j\") pod \"packageserver-d55dfcdfc-rfbk5\" (UID: \"d7707d7a-bfb7-4600-98f4-be607d9e77f4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408576 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b2e6a5f5-108e-4832-8036-58e1228a7f4f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2lgz4\" (UID: \"b2e6a5f5-108e-4832-8036-58e1228a7f4f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2lgz4" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408601 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c7fae259-48f4-4d23-8685-6440a5246423-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4dpv6\" (UID: \"c7fae259-48f4-4d23-8685-6440a5246423\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408648 4775 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/381c20f8-ed2d-4aa8-b99b-5d85a6eb5526-service-ca-bundle\") pod \"router-default-5444994796-nj2dd\" (UID: \"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526\") " pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408704 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/aaac7553-88f9-49bd-811f-e993ad0cd40d-plugins-dir\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408748 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-config\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408774 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90f0ee56-8c51-4a42-ae4e-385ff7453aa7-config\") pod \"kube-apiserver-operator-766d6c64bb-xhzp8\" (UID: \"90f0ee56-8c51-4a42-ae4e-385ff7453aa7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408828 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7fae259-48f4-4d23-8685-6440a5246423-serving-cert\") pod \"openshift-config-operator-7777fb866f-4dpv6\" (UID: \"c7fae259-48f4-4d23-8685-6440a5246423\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408884 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f5d381d-3a9d-4ba4-85fb-e9008e359729-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mm7b2\" (UID: \"0f5d381d-3a9d-4ba4-85fb-e9008e359729\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408913 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d1f9f7b-5676-4445-b8ec-1288e6beff20-config-volume\") pod \"dns-default-bvqqf\" (UID: \"6d1f9f7b-5676-4445-b8ec-1288e6beff20\") " pod="openshift-dns/dns-default-bvqqf" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408929 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-oauth-serving-cert\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408945 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwmqb\" (UniqueName: \"kubernetes.io/projected/f73c288c-acf3-4ce7-81c7-63953b2fc087-kube-api-access-pwmqb\") pod \"service-ca-9c57cc56f-btttg\" (UID: \"f73c288c-acf3-4ce7-81c7-63953b2fc087\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-btttg" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408963 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f5d381d-3a9d-4ba4-85fb-e9008e359729-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mm7b2\" (UID: \"0f5d381d-3a9d-4ba4-85fb-e9008e359729\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408982 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88ecf2d3-bdec-4fe8-a567-44550e85bb19-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5tss4\" (UID: \"88ecf2d3-bdec-4fe8-a567-44550e85bb19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408998 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm2g2\" (UniqueName: \"kubernetes.io/projected/88ecf2d3-bdec-4fe8-a567-44550e85bb19-kube-api-access-wm2g2\") pod \"kube-storage-version-migrator-operator-b67b599dd-5tss4\" (UID: \"88ecf2d3-bdec-4fe8-a567-44550e85bb19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409015 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88ecf2d3-bdec-4fe8-a567-44550e85bb19-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5tss4\" (UID: \"88ecf2d3-bdec-4fe8-a567-44550e85bb19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409039 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1680ee1-e1af-4c87-b9d9-d29e2b0a5043-auth-proxy-config\") pod \"machine-config-operator-74547568cd-prjn9\" (UID: \"e1680ee1-e1af-4c87-b9d9-d29e2b0a5043\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409054 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmxm2\" (UniqueName: \"kubernetes.io/projected/6d1f9f7b-5676-4445-b8ec-1288e6beff20-kube-api-access-vmxm2\") pod \"dns-default-bvqqf\" (UID: \"6d1f9f7b-5676-4445-b8ec-1288e6beff20\") " pod="openshift-dns/dns-default-bvqqf" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409071 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/384fd47a-81d2-4219-8a66-fbeec5bae860-profile-collector-cert\") pod \"olm-operator-6b444d44fb-65w5f\" (UID: \"384fd47a-81d2-4219-8a66-fbeec5bae860\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409094 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a925ae96-5ea9-4dba-9fbf-2ec5f5295026-serving-cert\") pod 
\"service-ca-operator-777779d784-fmbdl\" (UID: \"a925ae96-5ea9-4dba-9fbf-2ec5f5295026\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409110 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ps7f\" (UniqueName: \"kubernetes.io/projected/b2e6a5f5-108e-4832-8036-58e1228a7f4f-kube-api-access-9ps7f\") pod \"multus-admission-controller-857f4d67dd-2lgz4\" (UID: \"b2e6a5f5-108e-4832-8036-58e1228a7f4f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2lgz4" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409135 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf9wq\" (UniqueName: \"kubernetes.io/projected/0f5d381d-3a9d-4ba4-85fb-e9008e359729-kube-api-access-mf9wq\") pod \"cluster-image-registry-operator-dc59b4c8b-mm7b2\" (UID: \"0f5d381d-3a9d-4ba4-85fb-e9008e359729\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409161 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-oauth-config\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409177 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pmf9\" (UniqueName: \"kubernetes.io/projected/c7fae259-48f4-4d23-8685-6440a5246423-kube-api-access-5pmf9\") pod \"openshift-config-operator-7777fb866f-4dpv6\" (UID: \"c7fae259-48f4-4d23-8685-6440a5246423\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409192 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90f0ee56-8c51-4a42-ae4e-385ff7453aa7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xhzp8\" (UID: \"90f0ee56-8c51-4a42-ae4e-385ff7453aa7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409232 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/381c20f8-ed2d-4aa8-b99b-5d85a6eb5526-default-certificate\") pod \"router-default-5444994796-nj2dd\" (UID: \"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526\") " pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409249 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wxmb\" (UniqueName: \"kubernetes.io/projected/d5dfee7e-59a9-43b1-bd2e-f3200ea5322c-kube-api-access-5wxmb\") pod \"dns-operator-744455d44c-f7z9k\" (UID: \"d5dfee7e-59a9-43b1-bd2e-f3200ea5322c\") " pod="openshift-dns-operator/dns-operator-744455d44c-f7z9k" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409317 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxdqs\" (UniqueName: \"kubernetes.io/projected/924bd720-98da-4f7b-afbc-a7bfa822368f-kube-api-access-kxdqs\") pod \"ingress-canary-m5nll\" (UID: 
\"924bd720-98da-4f7b-afbc-a7bfa822368f\") " pod="openshift-ingress-canary/ingress-canary-m5nll" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409335 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e1680ee1-e1af-4c87-b9d9-d29e2b0a5043-images\") pod \"machine-config-operator-74547568cd-prjn9\" (UID: \"e1680ee1-e1af-4c87-b9d9-d29e2b0a5043\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409351 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bplmh\" (UniqueName: \"kubernetes.io/projected/e1680ee1-e1af-4c87-b9d9-d29e2b0a5043-kube-api-access-bplmh\") pod \"machine-config-operator-74547568cd-prjn9\" (UID: \"e1680ee1-e1af-4c87-b9d9-d29e2b0a5043\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409376 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d7707d7a-bfb7-4600-98f4-be607d9e77f4-tmpfs\") pod \"packageserver-d55dfcdfc-rfbk5\" (UID: \"d7707d7a-bfb7-4600-98f4-be607d9e77f4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409394 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/381c20f8-ed2d-4aa8-b99b-5d85a6eb5526-metrics-certs\") pod \"router-default-5444994796-nj2dd\" (UID: \"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526\") " pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409410 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/53bbb237-ded5-402c-9bc3-a1cda18e8cfb-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-lssd6\" (UID: \"53bbb237-ded5-402c-9bc3-a1cda18e8cfb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409426 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ad73212-43f6-49db-a38b-678185cbe9d4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d74p6\" (UID: \"2ad73212-43f6-49db-a38b-678185cbe9d4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409464 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv4f7\" (UniqueName: \"kubernetes.io/projected/09c7da5e-ce0a-4a3c-9419-420f63f93f0e-kube-api-access-cv4f7\") pod \"machine-config-server-kmqrn\" (UID: \"09c7da5e-ce0a-4a3c-9419-420f63f93f0e\") " pod="openshift-machine-config-operator/machine-config-server-kmqrn" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409478 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a925ae96-5ea9-4dba-9fbf-2ec5f5295026-config\") pod \"service-ca-operator-777779d784-fmbdl\" (UID: \"a925ae96-5ea9-4dba-9fbf-2ec5f5295026\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409494 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7pvl\" (UniqueName: \"kubernetes.io/projected/53bbb237-ded5-402c-9bc3-a1cda18e8cfb-kube-api-access-j7pvl\") pod \"package-server-manager-789f6589d5-lssd6\" (UID: \"53bbb237-ded5-402c-9bc3-a1cda18e8cfb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409508 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90f0ee56-8c51-4a42-ae4e-385ff7453aa7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xhzp8\" (UID: \"90f0ee56-8c51-4a42-ae4e-385ff7453aa7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409522 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/924bd720-98da-4f7b-afbc-a7bfa822368f-cert\") pod \"ingress-canary-m5nll\" (UID: \"924bd720-98da-4f7b-afbc-a7bfa822368f\") " pod="openshift-ingress-canary/ingress-canary-m5nll" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409538 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4j4h\" (UniqueName: \"kubernetes.io/projected/381c20f8-ed2d-4aa8-b99b-5d85a6eb5526-kube-api-access-q4j4h\") pod \"router-default-5444994796-nj2dd\" (UID: \"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526\") " pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409553 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ad73212-43f6-49db-a38b-678185cbe9d4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d74p6\" (UID: \"2ad73212-43f6-49db-a38b-678185cbe9d4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409576 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/09c7da5e-ce0a-4a3c-9419-420f63f93f0e-certs\") pod \"machine-config-server-kmqrn\" (UID: \"09c7da5e-ce0a-4a3c-9419-420f63f93f0e\") " pod="openshift-machine-config-operator/machine-config-server-kmqrn" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409591 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8w6h\" (UniqueName: \"kubernetes.io/projected/a925ae96-5ea9-4dba-9fbf-2ec5f5295026-kube-api-access-w8w6h\") pod \"service-ca-operator-777779d784-fmbdl\" (UID: \"a925ae96-5ea9-4dba-9fbf-2ec5f5295026\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409614 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/384fd47a-81d2-4219-8a66-fbeec5bae860-srv-cert\") pod \"olm-operator-6b444d44fb-65w5f\" (UID: \"384fd47a-81d2-4219-8a66-fbeec5bae860\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" 
Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409629 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6e802822-9935-46de-947b-c77bf8da4f9e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rknc7\" (UID: \"6e802822-9935-46de-947b-c77bf8da4f9e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409646 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a2f579b-0f13-47dd-9566-dd57100ab22a-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6fmtx\" (UID: \"8a2f579b-0f13-47dd-9566-dd57100ab22a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409664 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ad73212-43f6-49db-a38b-678185cbe9d4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d74p6\" (UID: \"2ad73212-43f6-49db-a38b-678185cbe9d4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409679 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6e802822-9935-46de-947b-c77bf8da4f9e-proxy-tls\") pod \"machine-config-controller-84d6567774-rknc7\" (UID: \"6e802822-9935-46de-947b-c77bf8da4f9e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409705 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4kh7\" (UniqueName: \"kubernetes.io/projected/6e802822-9935-46de-947b-c77bf8da4f9e-kube-api-access-z4kh7\") pod \"machine-config-controller-84d6567774-rknc7\" (UID: \"6e802822-9935-46de-947b-c77bf8da4f9e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409723 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f5d381d-3a9d-4ba4-85fb-e9008e359729-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mm7b2\" (UID: \"0f5d381d-3a9d-4ba4-85fb-e9008e359729\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409750 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/aaac7553-88f9-49bd-811f-e993ad0cd40d-csi-data-dir\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409774 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1680ee1-e1af-4c87-b9d9-d29e2b0a5043-proxy-tls\") pod \"machine-config-operator-74547568cd-prjn9\" (UID: \"e1680ee1-e1af-4c87-b9d9-d29e2b0a5043\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409797 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d7707d7a-bfb7-4600-98f4-be607d9e77f4-apiservice-cert\") pod \"packageserver-d55dfcdfc-rfbk5\" (UID: \"d7707d7a-bfb7-4600-98f4-be607d9e77f4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409838 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8a2f579b-0f13-47dd-9566-dd57100ab22a-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6fmtx\" (UID: \"8a2f579b-0f13-47dd-9566-dd57100ab22a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409853 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4304b2e3-9359-4caf-94dd-1e31716fee56-srv-cert\") pod \"catalog-operator-68c6474976-2vnwm\" (UID: \"4304b2e3-9359-4caf-94dd-1e31716fee56\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409869 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d5dfee7e-59a9-43b1-bd2e-f3200ea5322c-metrics-tls\") pod \"dns-operator-744455d44c-f7z9k\" (UID: \"d5dfee7e-59a9-43b1-bd2e-f3200ea5322c\") " pod="openshift-dns-operator/dns-operator-744455d44c-f7z9k" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409888 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f73c288c-acf3-4ce7-81c7-63953b2fc087-signing-key\") pod \"service-ca-9c57cc56f-btttg\" (UID: \"f73c288c-acf3-4ce7-81c7-63953b2fc087\") " pod="openshift-service-ca/service-ca-9c57cc56f-btttg" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409937 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-trusted-ca-bundle\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409972 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f73c288c-acf3-4ce7-81c7-63953b2fc087-signing-cabundle\") pod \"service-ca-9c57cc56f-btttg\" (UID: \"f73c288c-acf3-4ce7-81c7-63953b2fc087\") " pod="openshift-service-ca/service-ca-9c57cc56f-btttg" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409994 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-serving-cert\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.410030 4775 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-stlnp\" (UniqueName: \"kubernetes.io/projected/aaac7553-88f9-49bd-811f-e993ad0cd40d-kube-api-access-stlnp\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.410089 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/aaac7553-88f9-49bd-811f-e993ad0cd40d-socket-dir\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.410111 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4304b2e3-9359-4caf-94dd-1e31716fee56-profile-collector-cert\") pod \"catalog-operator-68c6474976-2vnwm\" (UID: \"4304b2e3-9359-4caf-94dd-1e31716fee56\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm" Jan 23 14:06:41 crc kubenswrapper[4775]: E0123 14:06:41.410231 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:41.910213088 +0000 UTC m=+148.905041828 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.413448 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e1680ee1-e1af-4c87-b9d9-d29e2b0a5043-images\") pod \"machine-config-operator-74547568cd-prjn9\" (UID: \"e1680ee1-e1af-4c87-b9d9-d29e2b0a5043\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.416160 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/aaac7553-88f9-49bd-811f-e993ad0cd40d-socket-dir\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.416519 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1680ee1-e1af-4c87-b9d9-d29e2b0a5043-auth-proxy-config\") pod \"machine-config-operator-74547568cd-prjn9\" (UID: \"e1680ee1-e1af-4c87-b9d9-d29e2b0a5043\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.417434 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f5d381d-3a9d-4ba4-85fb-e9008e359729-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mm7b2\" (UID: 
\"0f5d381d-3a9d-4ba4-85fb-e9008e359729\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.417444 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/aaac7553-88f9-49bd-811f-e993ad0cd40d-plugins-dir\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.417739 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-oauth-serving-cert\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.417764 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/aaac7553-88f9-49bd-811f-e993ad0cd40d-mountpoint-dir\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.418294 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/aaac7553-88f9-49bd-811f-e993ad0cd40d-csi-data-dir\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.418719 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/aaac7553-88f9-49bd-811f-e993ad0cd40d-registration-dir\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.419006 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c7fae259-48f4-4d23-8685-6440a5246423-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4dpv6\" (UID: \"c7fae259-48f4-4d23-8685-6440a5246423\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.420570 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/381c20f8-ed2d-4aa8-b99b-5d85a6eb5526-service-ca-bundle\") pod \"router-default-5444994796-nj2dd\" (UID: \"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526\") " pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.420850 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-config\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.421145 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/381c20f8-ed2d-4aa8-b99b-5d85a6eb5526-default-certificate\") pod 
\"router-default-5444994796-nj2dd\" (UID: \"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526\") " pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.421955 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f5d381d-3a9d-4ba4-85fb-e9008e359729-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mm7b2\" (UID: \"0f5d381d-3a9d-4ba4-85fb-e9008e359729\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.422247 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-serving-cert\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.422328 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1680ee1-e1af-4c87-b9d9-d29e2b0a5043-proxy-tls\") pod \"machine-config-operator-74547568cd-prjn9\" (UID: \"e1680ee1-e1af-4c87-b9d9-d29e2b0a5043\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.423109 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7fae259-48f4-4d23-8685-6440a5246423-serving-cert\") pod \"openshift-config-operator-7777fb866f-4dpv6\" (UID: \"c7fae259-48f4-4d23-8685-6440a5246423\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.423287 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-service-ca\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.428364 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/381c20f8-ed2d-4aa8-b99b-5d85a6eb5526-metrics-certs\") pod \"router-default-5444994796-nj2dd\" (UID: \"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526\") " pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.428660 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-trusted-ca-bundle\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.432968 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/381c20f8-ed2d-4aa8-b99b-5d85a6eb5526-stats-auth\") pod \"router-default-5444994796-nj2dd\" (UID: \"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526\") " pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.440485 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/b2e6a5f5-108e-4832-8036-58e1228a7f4f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2lgz4\" (UID: \"b2e6a5f5-108e-4832-8036-58e1228a7f4f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2lgz4" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.440769 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-oauth-config\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.443411 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bplmh\" (UniqueName: \"kubernetes.io/projected/e1680ee1-e1af-4c87-b9d9-d29e2b0a5043-kube-api-access-bplmh\") pod \"machine-config-operator-74547568cd-prjn9\" (UID: \"e1680ee1-e1af-4c87-b9d9-d29e2b0a5043\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.446957 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f5d381d-3a9d-4ba4-85fb-e9008e359729-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mm7b2\" (UID: \"0f5d381d-3a9d-4ba4-85fb-e9008e359729\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.465594 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4j4h\" (UniqueName: \"kubernetes.io/projected/381c20f8-ed2d-4aa8-b99b-5d85a6eb5526-kube-api-access-q4j4h\") pod \"router-default-5444994796-nj2dd\" (UID: \"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526\") " pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.480520 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-577dd"] Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.494750 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stlnp\" (UniqueName: \"kubernetes.io/projected/aaac7553-88f9-49bd-811f-e993ad0cd40d-kube-api-access-stlnp\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.505526 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ps7f\" (UniqueName: \"kubernetes.io/projected/b2e6a5f5-108e-4832-8036-58e1228a7f4f-kube-api-access-9ps7f\") pod \"multus-admission-controller-857f4d67dd-2lgz4\" (UID: \"b2e6a5f5-108e-4832-8036-58e1228a7f4f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2lgz4" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.511656 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d7707d7a-bfb7-4600-98f4-be607d9e77f4-apiservice-cert\") pod \"packageserver-d55dfcdfc-rfbk5\" (UID: \"d7707d7a-bfb7-4600-98f4-be607d9e77f4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.512407 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/8a2f579b-0f13-47dd-9566-dd57100ab22a-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6fmtx\" (UID: \"8a2f579b-0f13-47dd-9566-dd57100ab22a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.512464 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4304b2e3-9359-4caf-94dd-1e31716fee56-srv-cert\") pod \"catalog-operator-68c6474976-2vnwm\" (UID: \"4304b2e3-9359-4caf-94dd-1e31716fee56\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.512490 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d5dfee7e-59a9-43b1-bd2e-f3200ea5322c-metrics-tls\") pod \"dns-operator-744455d44c-f7z9k\" (UID: \"d5dfee7e-59a9-43b1-bd2e-f3200ea5322c\") " pod="openshift-dns-operator/dns-operator-744455d44c-f7z9k" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.512509 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f73c288c-acf3-4ce7-81c7-63953b2fc087-signing-key\") pod \"service-ca-9c57cc56f-btttg\" (UID: \"f73c288c-acf3-4ce7-81c7-63953b2fc087\") " pod="openshift-service-ca/service-ca-9c57cc56f-btttg" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.512539 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f73c288c-acf3-4ce7-81c7-63953b2fc087-signing-cabundle\") pod \"service-ca-9c57cc56f-btttg\" (UID: \"f73c288c-acf3-4ce7-81c7-63953b2fc087\") " pod="openshift-service-ca/service-ca-9c57cc56f-btttg" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.512595 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4304b2e3-9359-4caf-94dd-1e31716fee56-profile-collector-cert\") pod \"catalog-operator-68c6474976-2vnwm\" (UID: \"4304b2e3-9359-4caf-94dd-1e31716fee56\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.512624 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/09c7da5e-ce0a-4a3c-9419-420f63f93f0e-node-bootstrap-token\") pod \"machine-config-server-kmqrn\" (UID: \"09c7da5e-ce0a-4a3c-9419-420f63f93f0e\") " pod="openshift-machine-config-operator/machine-config-server-kmqrn" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.512663 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d7707d7a-bfb7-4600-98f4-be607d9e77f4-webhook-cert\") pod \"packageserver-d55dfcdfc-rfbk5\" (UID: \"d7707d7a-bfb7-4600-98f4-be607d9e77f4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.512693 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26bb6\" (UniqueName: \"kubernetes.io/projected/98e5fa0e-5fb3-4a38-bcdc-328a22d4460f-kube-api-access-26bb6\") pod \"migrator-59844c95c7-br76j\" (UID: \"98e5fa0e-5fb3-4a38-bcdc-328a22d4460f\") " 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-br76j" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.512742 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6d1f9f7b-5676-4445-b8ec-1288e6beff20-metrics-tls\") pod \"dns-default-bvqqf\" (UID: \"6d1f9f7b-5676-4445-b8ec-1288e6beff20\") " pod="openshift-dns/dns-default-bvqqf" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.512769 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a2f579b-0f13-47dd-9566-dd57100ab22a-config\") pod \"kube-controller-manager-operator-78b949d7b-6fmtx\" (UID: \"8a2f579b-0f13-47dd-9566-dd57100ab22a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.512793 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmss\" (UniqueName: \"kubernetes.io/projected/384fd47a-81d2-4219-8a66-fbeec5bae860-kube-api-access-hqmss\") pod \"olm-operator-6b444d44fb-65w5f\" (UID: \"384fd47a-81d2-4219-8a66-fbeec5bae860\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.513513 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f73c288c-acf3-4ce7-81c7-63953b2fc087-signing-cabundle\") pod \"service-ca-9c57cc56f-btttg\" (UID: \"f73c288c-acf3-4ce7-81c7-63953b2fc087\") " pod="openshift-service-ca/service-ca-9c57cc56f-btttg" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.514524 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a2f579b-0f13-47dd-9566-dd57100ab22a-config\") pod \"kube-controller-manager-operator-78b949d7b-6fmtx\" (UID: \"8a2f579b-0f13-47dd-9566-dd57100ab22a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.514590 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jmf4\" (UniqueName: \"kubernetes.io/projected/4304b2e3-9359-4caf-94dd-1e31716fee56-kube-api-access-8jmf4\") pod \"catalog-operator-68c6474976-2vnwm\" (UID: \"4304b2e3-9359-4caf-94dd-1e31716fee56\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.514616 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r78j\" (UniqueName: \"kubernetes.io/projected/d7707d7a-bfb7-4600-98f4-be607d9e77f4-kube-api-access-9r78j\") pod \"packageserver-d55dfcdfc-rfbk5\" (UID: \"d7707d7a-bfb7-4600-98f4-be607d9e77f4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.514672 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90f0ee56-8c51-4a42-ae4e-385ff7453aa7-config\") pod \"kube-apiserver-operator-766d6c64bb-xhzp8\" (UID: \"90f0ee56-8c51-4a42-ae4e-385ff7453aa7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.514715 4775 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d1f9f7b-5676-4445-b8ec-1288e6beff20-config-volume\") pod \"dns-default-bvqqf\" (UID: \"6d1f9f7b-5676-4445-b8ec-1288e6beff20\") " pod="openshift-dns/dns-default-bvqqf" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.514743 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwmqb\" (UniqueName: \"kubernetes.io/projected/f73c288c-acf3-4ce7-81c7-63953b2fc087-kube-api-access-pwmqb\") pod \"service-ca-9c57cc56f-btttg\" (UID: \"f73c288c-acf3-4ce7-81c7-63953b2fc087\") " pod="openshift-service-ca/service-ca-9c57cc56f-btttg" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.514775 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88ecf2d3-bdec-4fe8-a567-44550e85bb19-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5tss4\" (UID: \"88ecf2d3-bdec-4fe8-a567-44550e85bb19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.514795 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm2g2\" (UniqueName: \"kubernetes.io/projected/88ecf2d3-bdec-4fe8-a567-44550e85bb19-kube-api-access-wm2g2\") pod \"kube-storage-version-migrator-operator-b67b599dd-5tss4\" (UID: \"88ecf2d3-bdec-4fe8-a567-44550e85bb19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.514857 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88ecf2d3-bdec-4fe8-a567-44550e85bb19-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5tss4\" (UID: \"88ecf2d3-bdec-4fe8-a567-44550e85bb19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.514879 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmxm2\" (UniqueName: \"kubernetes.io/projected/6d1f9f7b-5676-4445-b8ec-1288e6beff20-kube-api-access-vmxm2\") pod \"dns-default-bvqqf\" (UID: \"6d1f9f7b-5676-4445-b8ec-1288e6beff20\") " pod="openshift-dns/dns-default-bvqqf" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.514904 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/384fd47a-81d2-4219-8a66-fbeec5bae860-profile-collector-cert\") pod \"olm-operator-6b444d44fb-65w5f\" (UID: \"384fd47a-81d2-4219-8a66-fbeec5bae860\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.514936 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.514956 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/a925ae96-5ea9-4dba-9fbf-2ec5f5295026-serving-cert\") pod \"service-ca-operator-777779d784-fmbdl\" (UID: \"a925ae96-5ea9-4dba-9fbf-2ec5f5295026\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515006 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90f0ee56-8c51-4a42-ae4e-385ff7453aa7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xhzp8\" (UID: \"90f0ee56-8c51-4a42-ae4e-385ff7453aa7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515029 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wxmb\" (UniqueName: \"kubernetes.io/projected/d5dfee7e-59a9-43b1-bd2e-f3200ea5322c-kube-api-access-5wxmb\") pod \"dns-operator-744455d44c-f7z9k\" (UID: \"d5dfee7e-59a9-43b1-bd2e-f3200ea5322c\") " pod="openshift-dns-operator/dns-operator-744455d44c-f7z9k" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515076 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxdqs\" (UniqueName: \"kubernetes.io/projected/924bd720-98da-4f7b-afbc-a7bfa822368f-kube-api-access-kxdqs\") pod \"ingress-canary-m5nll\" (UID: \"924bd720-98da-4f7b-afbc-a7bfa822368f\") " pod="openshift-ingress-canary/ingress-canary-m5nll" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515109 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d7707d7a-bfb7-4600-98f4-be607d9e77f4-tmpfs\") pod \"packageserver-d55dfcdfc-rfbk5\" (UID: \"d7707d7a-bfb7-4600-98f4-be607d9e77f4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515133 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/53bbb237-ded5-402c-9bc3-a1cda18e8cfb-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-lssd6\" (UID: \"53bbb237-ded5-402c-9bc3-a1cda18e8cfb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515151 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ad73212-43f6-49db-a38b-678185cbe9d4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d74p6\" (UID: \"2ad73212-43f6-49db-a38b-678185cbe9d4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515194 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv4f7\" (UniqueName: \"kubernetes.io/projected/09c7da5e-ce0a-4a3c-9419-420f63f93f0e-kube-api-access-cv4f7\") pod \"machine-config-server-kmqrn\" (UID: \"09c7da5e-ce0a-4a3c-9419-420f63f93f0e\") " pod="openshift-machine-config-operator/machine-config-server-kmqrn" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515212 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a925ae96-5ea9-4dba-9fbf-2ec5f5295026-config\") pod \"service-ca-operator-777779d784-fmbdl\" (UID: 
\"a925ae96-5ea9-4dba-9fbf-2ec5f5295026\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515231 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7pvl\" (UniqueName: \"kubernetes.io/projected/53bbb237-ded5-402c-9bc3-a1cda18e8cfb-kube-api-access-j7pvl\") pod \"package-server-manager-789f6589d5-lssd6\" (UID: \"53bbb237-ded5-402c-9bc3-a1cda18e8cfb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515254 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90f0ee56-8c51-4a42-ae4e-385ff7453aa7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xhzp8\" (UID: \"90f0ee56-8c51-4a42-ae4e-385ff7453aa7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515275 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/924bd720-98da-4f7b-afbc-a7bfa822368f-cert\") pod \"ingress-canary-m5nll\" (UID: \"924bd720-98da-4f7b-afbc-a7bfa822368f\") " pod="openshift-ingress-canary/ingress-canary-m5nll" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515301 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ad73212-43f6-49db-a38b-678185cbe9d4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d74p6\" (UID: \"2ad73212-43f6-49db-a38b-678185cbe9d4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515323 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/09c7da5e-ce0a-4a3c-9419-420f63f93f0e-certs\") pod \"machine-config-server-kmqrn\" (UID: \"09c7da5e-ce0a-4a3c-9419-420f63f93f0e\") " pod="openshift-machine-config-operator/machine-config-server-kmqrn" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515341 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8w6h\" (UniqueName: \"kubernetes.io/projected/a925ae96-5ea9-4dba-9fbf-2ec5f5295026-kube-api-access-w8w6h\") pod \"service-ca-operator-777779d784-fmbdl\" (UID: \"a925ae96-5ea9-4dba-9fbf-2ec5f5295026\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515362 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/384fd47a-81d2-4219-8a66-fbeec5bae860-srv-cert\") pod \"olm-operator-6b444d44fb-65w5f\" (UID: \"384fd47a-81d2-4219-8a66-fbeec5bae860\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515379 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6e802822-9935-46de-947b-c77bf8da4f9e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rknc7\" (UID: \"6e802822-9935-46de-947b-c77bf8da4f9e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 
14:06:41.515401 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a2f579b-0f13-47dd-9566-dd57100ab22a-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6fmtx\" (UID: \"8a2f579b-0f13-47dd-9566-dd57100ab22a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515418 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ad73212-43f6-49db-a38b-678185cbe9d4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d74p6\" (UID: \"2ad73212-43f6-49db-a38b-678185cbe9d4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515433 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6e802822-9935-46de-947b-c77bf8da4f9e-proxy-tls\") pod \"machine-config-controller-84d6567774-rknc7\" (UID: \"6e802822-9935-46de-947b-c77bf8da4f9e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515460 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4kh7\" (UniqueName: \"kubernetes.io/projected/6e802822-9935-46de-947b-c77bf8da4f9e-kube-api-access-z4kh7\") pod \"machine-config-controller-84d6567774-rknc7\" (UID: \"6e802822-9935-46de-947b-c77bf8da4f9e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515540 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d1f9f7b-5676-4445-b8ec-1288e6beff20-config-volume\") pod \"dns-default-bvqqf\" (UID: \"6d1f9f7b-5676-4445-b8ec-1288e6beff20\") " pod="openshift-dns/dns-default-bvqqf" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.516195 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d5dfee7e-59a9-43b1-bd2e-f3200ea5322c-metrics-tls\") pod \"dns-operator-744455d44c-f7z9k\" (UID: \"d5dfee7e-59a9-43b1-bd2e-f3200ea5322c\") " pod="openshift-dns-operator/dns-operator-744455d44c-f7z9k" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.516197 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6d1f9f7b-5676-4445-b8ec-1288e6beff20-metrics-tls\") pod \"dns-default-bvqqf\" (UID: \"6d1f9f7b-5676-4445-b8ec-1288e6beff20\") " pod="openshift-dns/dns-default-bvqqf" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.516549 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d7707d7a-bfb7-4600-98f4-be607d9e77f4-apiservice-cert\") pod \"packageserver-d55dfcdfc-rfbk5\" (UID: \"d7707d7a-bfb7-4600-98f4-be607d9e77f4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" Jan 23 14:06:41 crc kubenswrapper[4775]: E0123 14:06:41.516554 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-23 14:06:42.016540193 +0000 UTC m=+149.011368933 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.517662 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4304b2e3-9359-4caf-94dd-1e31716fee56-profile-collector-cert\") pod \"catalog-operator-68c6474976-2vnwm\" (UID: \"4304b2e3-9359-4caf-94dd-1e31716fee56\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.517910 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6e802822-9935-46de-947b-c77bf8da4f9e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rknc7\" (UID: \"6e802822-9935-46de-947b-c77bf8da4f9e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.518417 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d7707d7a-bfb7-4600-98f4-be607d9e77f4-webhook-cert\") pod \"packageserver-d55dfcdfc-rfbk5\" (UID: \"d7707d7a-bfb7-4600-98f4-be607d9e77f4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.518427 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4304b2e3-9359-4caf-94dd-1e31716fee56-srv-cert\") pod \"catalog-operator-68c6474976-2vnwm\" (UID: \"4304b2e3-9359-4caf-94dd-1e31716fee56\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.519036 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d7707d7a-bfb7-4600-98f4-be607d9e77f4-tmpfs\") pod \"packageserver-d55dfcdfc-rfbk5\" (UID: \"d7707d7a-bfb7-4600-98f4-be607d9e77f4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.519081 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88ecf2d3-bdec-4fe8-a567-44550e85bb19-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5tss4\" (UID: \"88ecf2d3-bdec-4fe8-a567-44550e85bb19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.519341 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90f0ee56-8c51-4a42-ae4e-385ff7453aa7-config\") pod \"kube-apiserver-operator-766d6c64bb-xhzp8\" (UID: \"90f0ee56-8c51-4a42-ae4e-385ff7453aa7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8" Jan 23 14:06:41 crc kubenswrapper[4775]: 
I0123 14:06:41.521373 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a925ae96-5ea9-4dba-9fbf-2ec5f5295026-config\") pod \"service-ca-operator-777779d784-fmbdl\" (UID: \"a925ae96-5ea9-4dba-9fbf-2ec5f5295026\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.521995 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ad73212-43f6-49db-a38b-678185cbe9d4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d74p6\" (UID: \"2ad73212-43f6-49db-a38b-678185cbe9d4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.522280 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/09c7da5e-ce0a-4a3c-9419-420f63f93f0e-node-bootstrap-token\") pod \"machine-config-server-kmqrn\" (UID: \"09c7da5e-ce0a-4a3c-9419-420f63f93f0e\") " pod="openshift-machine-config-operator/machine-config-server-kmqrn" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.522610 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90f0ee56-8c51-4a42-ae4e-385ff7453aa7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xhzp8\" (UID: \"90f0ee56-8c51-4a42-ae4e-385ff7453aa7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.522704 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f73c288c-acf3-4ce7-81c7-63953b2fc087-signing-key\") pod \"service-ca-9c57cc56f-btttg\" (UID: \"f73c288c-acf3-4ce7-81c7-63953b2fc087\") " pod="openshift-service-ca/service-ca-9c57cc56f-btttg" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.523463 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88ecf2d3-bdec-4fe8-a567-44550e85bb19-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5tss4\" (UID: \"88ecf2d3-bdec-4fe8-a567-44550e85bb19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.523498 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a925ae96-5ea9-4dba-9fbf-2ec5f5295026-serving-cert\") pod \"service-ca-operator-777779d784-fmbdl\" (UID: \"a925ae96-5ea9-4dba-9fbf-2ec5f5295026\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.525275 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a2f579b-0f13-47dd-9566-dd57100ab22a-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6fmtx\" (UID: \"8a2f579b-0f13-47dd-9566-dd57100ab22a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.525319 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2ad73212-43f6-49db-a38b-678185cbe9d4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d74p6\" (UID: \"2ad73212-43f6-49db-a38b-678185cbe9d4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.526311 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/09c7da5e-ce0a-4a3c-9419-420f63f93f0e-certs\") pod \"machine-config-server-kmqrn\" (UID: \"09c7da5e-ce0a-4a3c-9419-420f63f93f0e\") " pod="openshift-machine-config-operator/machine-config-server-kmqrn" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.529019 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/384fd47a-81d2-4219-8a66-fbeec5bae860-profile-collector-cert\") pod \"olm-operator-6b444d44fb-65w5f\" (UID: \"384fd47a-81d2-4219-8a66-fbeec5bae860\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.532321 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf9wq\" (UniqueName: \"kubernetes.io/projected/0f5d381d-3a9d-4ba4-85fb-e9008e359729-kube-api-access-mf9wq\") pod \"cluster-image-registry-operator-dc59b4c8b-mm7b2\" (UID: \"0f5d381d-3a9d-4ba4-85fb-e9008e359729\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.533340 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/53bbb237-ded5-402c-9bc3-a1cda18e8cfb-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-lssd6\" (UID: \"53bbb237-ded5-402c-9bc3-a1cda18e8cfb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.533441 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/924bd720-98da-4f7b-afbc-a7bfa822368f-cert\") pod \"ingress-canary-m5nll\" (UID: \"924bd720-98da-4f7b-afbc-a7bfa822368f\") " pod="openshift-ingress-canary/ingress-canary-m5nll" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.533560 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6e802822-9935-46de-947b-c77bf8da4f9e-proxy-tls\") pod \"machine-config-controller-84d6567774-rknc7\" (UID: \"6e802822-9935-46de-947b-c77bf8da4f9e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.534388 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/384fd47a-81d2-4219-8a66-fbeec5bae860-srv-cert\") pod \"olm-operator-6b444d44fb-65w5f\" (UID: \"384fd47a-81d2-4219-8a66-fbeec5bae860\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.542512 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ddqcf" event={"ID":"a5c75370-d1c6-43bd-a8e8-8836ea5bdb22","Type":"ContainerStarted","Data":"bb145b8c8f1d9c65d19d51f3fc510aab7854b83b2164cb5cb8f17aa62cb2de6b"} Jan 23 14:06:41 crc 
kubenswrapper[4775]: I0123 14:06:41.545395 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pmf9\" (UniqueName: \"kubernetes.io/projected/c7fae259-48f4-4d23-8685-6440a5246423-kube-api-access-5pmf9\") pod \"openshift-config-operator-7777fb866f-4dpv6\" (UID: \"c7fae259-48f4-4d23-8685-6440a5246423\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.545533 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0f24ea82f10e41d59727dab54387a8dc961e3bac03585b6673fa010f71e431ce"} Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.545700 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.558667 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"bc3404126e9619f10821b8e85b5f5bbeb0506f42b79c1742b2e37d0e6f7014f5"} Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.561462 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgvmt\" (UniqueName: \"kubernetes.io/projected/a6821f92-2d15-4dc0-92ed-7a30cef98db9-kube-api-access-tgvmt\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.565485 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" event={"ID":"216b36e4-0e40-4073-9432-d1977dc6e03a","Type":"ContainerStarted","Data":"48f09891a71c60da2dd93d7b65738fd065066ef5c721963f0ff962507f68292a"} Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.575880 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"57beb13a8a81752da92a78b6f82384b2c9ba4c377404b875c31ea5a59e72cd20"} Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.587167 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.594596 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.611855 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8a2f579b-0f13-47dd-9566-dd57100ab22a-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6fmtx\" (UID: \"8a2f579b-0f13-47dd-9566-dd57100ab22a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.619200 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:41 crc kubenswrapper[4775]: E0123 14:06:41.620116 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:42.120096146 +0000 UTC m=+149.114924886 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.626310 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqmss\" (UniqueName: \"kubernetes.io/projected/384fd47a-81d2-4219-8a66-fbeec5bae860-kube-api-access-hqmss\") pod \"olm-operator-6b444d44fb-65w5f\" (UID: \"384fd47a-81d2-4219-8a66-fbeec5bae860\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.646193 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.657152 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26bb6\" (UniqueName: \"kubernetes.io/projected/98e5fa0e-5fb3-4a38-bcdc-328a22d4460f-kube-api-access-26bb6\") pod \"migrator-59844c95c7-br76j\" (UID: \"98e5fa0e-5fb3-4a38-bcdc-328a22d4460f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-br76j" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.666971 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jmf4\" (UniqueName: \"kubernetes.io/projected/4304b2e3-9359-4caf-94dd-1e31716fee56-kube-api-access-8jmf4\") pod \"catalog-operator-68c6474976-2vnwm\" (UID: \"4304b2e3-9359-4caf-94dd-1e31716fee56\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.685872 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.692601 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4kh7\" (UniqueName: \"kubernetes.io/projected/6e802822-9935-46de-947b-c77bf8da4f9e-kube-api-access-z4kh7\") pod \"machine-config-controller-84d6567774-rknc7\" (UID: \"6e802822-9935-46de-947b-c77bf8da4f9e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.702952 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r78j\" (UniqueName: \"kubernetes.io/projected/d7707d7a-bfb7-4600-98f4-be607d9e77f4-kube-api-access-9r78j\") pod \"packageserver-d55dfcdfc-rfbk5\" (UID: \"d7707d7a-bfb7-4600-98f4-be607d9e77f4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.708263 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.717200 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-2lgz4" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.720408 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: E0123 14:06:41.721218 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:42.221199825 +0000 UTC m=+149.216028555 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.721928 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwmqb\" (UniqueName: \"kubernetes.io/projected/f73c288c-acf3-4ce7-81c7-63953b2fc087-kube-api-access-pwmqb\") pod \"service-ca-9c57cc56f-btttg\" (UID: \"f73c288c-acf3-4ce7-81c7-63953b2fc087\") " pod="openshift-service-ca/service-ca-9c57cc56f-btttg" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.735594 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.739367 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv4f7\" (UniqueName: \"kubernetes.io/projected/09c7da5e-ce0a-4a3c-9419-420f63f93f0e-kube-api-access-cv4f7\") pod \"machine-config-server-kmqrn\" (UID: \"09c7da5e-ce0a-4a3c-9419-420f63f93f0e\") " pod="openshift-machine-config-operator/machine-config-server-kmqrn" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.763679 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8w6h\" (UniqueName: \"kubernetes.io/projected/a925ae96-5ea9-4dba-9fbf-2ec5f5295026-kube-api-access-w8w6h\") pod \"service-ca-operator-777779d784-fmbdl\" (UID: \"a925ae96-5ea9-4dba-9fbf-2ec5f5295026\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.777344 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.784152 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7pvl\" (UniqueName: \"kubernetes.io/projected/53bbb237-ded5-402c-9bc3-a1cda18e8cfb-kube-api-access-j7pvl\") pod \"package-server-manager-789f6589d5-lssd6\" (UID: \"53bbb237-ded5-402c-9bc3-a1cda18e8cfb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.794722 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.800648 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wxmb\" (UniqueName: \"kubernetes.io/projected/d5dfee7e-59a9-43b1-bd2e-f3200ea5322c-kube-api-access-5wxmb\") pod \"dns-operator-744455d44c-f7z9k\" (UID: \"d5dfee7e-59a9-43b1-bd2e-f3200ea5322c\") " pod="openshift-dns-operator/dns-operator-744455d44c-f7z9k" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.804522 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.822834 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kmqrn" Jan 23 14:06:41 crc kubenswrapper[4775]: E0123 14:06:41.823331 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:42.323300894 +0000 UTC m=+149.318129634 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.823241 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.824738 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: E0123 14:06:41.825529 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:42.32551045 +0000 UTC m=+149.320339190 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.826159 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.835871 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxdqs\" (UniqueName: \"kubernetes.io/projected/924bd720-98da-4f7b-afbc-a7bfa822368f-kube-api-access-kxdqs\") pod \"ingress-canary-m5nll\" (UID: \"924bd720-98da-4f7b-afbc-a7bfa822368f\") " pod="openshift-ingress-canary/ingress-canary-m5nll" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.841217 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-br76j" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.848788 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90f0ee56-8c51-4a42-ae4e-385ff7453aa7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xhzp8\" (UID: \"90f0ee56-8c51-4a42-ae4e-385ff7453aa7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.860837 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.874190 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm2g2\" (UniqueName: \"kubernetes.io/projected/88ecf2d3-bdec-4fe8-a567-44550e85bb19-kube-api-access-wm2g2\") pod \"kube-storage-version-migrator-operator-b67b599dd-5tss4\" (UID: \"88ecf2d3-bdec-4fe8-a567-44550e85bb19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.878413 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-btttg" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.881497 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmxm2\" (UniqueName: \"kubernetes.io/projected/6d1f9f7b-5676-4445-b8ec-1288e6beff20-kube-api-access-vmxm2\") pod \"dns-default-bvqqf\" (UID: \"6d1f9f7b-5676-4445-b8ec-1288e6beff20\") " pod="openshift-dns/dns-default-bvqqf" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.892299 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-m5nll" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.903584 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-bvqqf" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.906759 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ad73212-43f6-49db-a38b-678185cbe9d4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d74p6\" (UID: \"2ad73212-43f6-49db-a38b-678185cbe9d4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.937425 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:41 crc kubenswrapper[4775]: E0123 14:06:41.938182 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:42.438158254 +0000 UTC m=+149.432986994 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.028681 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-f7z9k" Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.038836 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:42 crc kubenswrapper[4775]: E0123 14:06:42.039254 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:42.539238703 +0000 UTC m=+149.534067443 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.046558 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4" Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.056059 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl" Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.063170 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6" Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.083618 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8" Jan 23 14:06:42 crc kubenswrapper[4775]: W0123 14:06:42.085595 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09c7da5e_ce0a_4a3c_9419_420f63f93f0e.slice/crio-d271a3a0cdff5735151ee441e801b7cc05ab47a74e03b6fe6f80fff57ca77bb4 WatchSource:0}: Error finding container d271a3a0cdff5735151ee441e801b7cc05ab47a74e03b6fe6f80fff57ca77bb4: Status 404 returned error can't find the container with id d271a3a0cdff5735151ee441e801b7cc05ab47a74e03b6fe6f80fff57ca77bb4 Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.120555 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psxgx"] Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.132694 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6" Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.134811 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bjb9d"] Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.136526 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-mvqcg"] Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.140730 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:42 crc kubenswrapper[4775]: E0123 14:06:42.140963 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:42.6409483 +0000 UTC m=+149.635777030 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.141026 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:42 crc kubenswrapper[4775]: E0123 14:06:42.141250 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:42.641243429 +0000 UTC m=+149.636072159 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:42 crc kubenswrapper[4775]: W0123 14:06:42.238080 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc6b05de_2295_4c6a_8f11_367da8bdcf00.slice/crio-7bb3883d76a0a8deb73c6f02e0be245caa774962528a91f583e000e2b4726ad6 WatchSource:0}: Error finding container 7bb3883d76a0a8deb73c6f02e0be245caa774962528a91f583e000e2b4726ad6: Status 404 returned error can't find the container with id 7bb3883d76a0a8deb73c6f02e0be245caa774962528a91f583e000e2b4726ad6 Jan 23 14:06:42 crc kubenswrapper[4775]: W0123 14:06:42.240279 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ba1b8ce_8332_45c9_bfb0_9a1842dea009.slice/crio-18f80c38635d9ae89f00b09f96ca84b976927c6fd597f63188353d171f52b648 WatchSource:0}: Error finding container 18f80c38635d9ae89f00b09f96ca84b976927c6fd597f63188353d171f52b648: Status 404 returned error can't find the container with id 18f80c38635d9ae89f00b09f96ca84b976927c6fd597f63188353d171f52b648 Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.241908 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:42 crc kubenswrapper[4775]: E0123 14:06:42.242310 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:42.742288797 +0000 UTC m=+149.737117547 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.345455 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:42 crc kubenswrapper[4775]: E0123 14:06:42.345826 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-23 14:06:42.845796568 +0000 UTC m=+149.840625308 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.453873 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:42 crc kubenswrapper[4775]: E0123 14:06:42.454078 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:42.954064451 +0000 UTC m=+149.948893191 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.556568 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:42 crc kubenswrapper[4775]: E0123 14:06:42.559498 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:43.059483599 +0000 UTC m=+150.054312339 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.580199 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ddqcf" event={"ID":"a5c75370-d1c6-43bd-a8e8-8836ea5bdb22","Type":"ContainerStarted","Data":"f94dbbf973341affd2538b3cbc434d7ad4fd81a5deaa125003de6b72b9911054"} Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.580247 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ddqcf" event={"ID":"a5c75370-d1c6-43bd-a8e8-8836ea5bdb22","Type":"ContainerStarted","Data":"5d0a45f59c61e2597b3665b4093c4210d42c5109a22dae499d34d976d5711df6"} Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.586372 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" event={"ID":"cc6b05de-2295-4c6a-8f11-367da8bdcf00","Type":"ContainerStarted","Data":"7bb3883d76a0a8deb73c6f02e0be245caa774962528a91f583e000e2b4726ad6"} Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.593366 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" event={"ID":"f38f7554-61cc-493f-8705-8da5f91d3926","Type":"ContainerStarted","Data":"f894849e32473455829c7b40b313941acc7320d102611e6c3ac59b3ced619c0a"} Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.593411 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" event={"ID":"f38f7554-61cc-493f-8705-8da5f91d3926","Type":"ContainerStarted","Data":"5b0d03f1967a283269994ccf5b1a0dd0a5943d80f6fb4af0aca71021f9919b58"} Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.596547 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psxgx" event={"ID":"13e16abe-9325-4638-8b20-7195b7af8e68","Type":"ContainerStarted","Data":"33584a8eb34271542645243cdb07bed0b6f45ebf9838dc8cddc231e46573e88b"} Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.605814 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kmqrn" event={"ID":"09c7da5e-ce0a-4a3c-9419-420f63f93f0e","Type":"ContainerStarted","Data":"fea98ee07609f5547e776eb1de5619607f5763636cff987d5948eb9e56e2b31b"} Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.605869 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kmqrn" event={"ID":"09c7da5e-ce0a-4a3c-9419-420f63f93f0e","Type":"ContainerStarted","Data":"d271a3a0cdff5735151ee441e801b7cc05ab47a74e03b6fe6f80fff57ca77bb4"} Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.627915 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" event={"ID":"3066d31d-92a4-45a7-b368-ba66d5689456","Type":"ContainerStarted","Data":"b55e2c335cddf1f1e9c9202e83c490ce85712c353fa0cf36a620dab99d97659f"} 
Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.628988 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.648617 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" event={"ID":"216b36e4-0e40-4073-9432-d1977dc6e03a","Type":"ContainerStarted","Data":"8f41307c72e8249a81cfaf681b76ff654ea059bcc3caa121afee462f92bd4f8f"} Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.648703 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" event={"ID":"216b36e4-0e40-4073-9432-d1977dc6e03a","Type":"ContainerStarted","Data":"c6ba06eea1caea63b9c1618708d0058fd8c2a9908da60ea31992e6ace6fc83d0"} Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.659879 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mvqcg" event={"ID":"8ba1b8ce-8332-45c9-bfb0-9a1842dea009","Type":"ContainerStarted","Data":"5750ec86d3204a228dcf0783fbf4c9551f8adee39c18a43ac4f08c4129127cdd"} Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.659928 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mvqcg" event={"ID":"8ba1b8ce-8332-45c9-bfb0-9a1842dea009","Type":"ContainerStarted","Data":"18f80c38635d9ae89f00b09f96ca84b976927c6fd597f63188353d171f52b648"} Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.660923 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-mvqcg" Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.661458 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:42 crc kubenswrapper[4775]: E0123 14:06:42.663352 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:43.16331911 +0000 UTC m=+150.158147850 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.695576 4775 patch_prober.go:28] interesting pod/downloads-7954f5f757-mvqcg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.695632 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mvqcg" podUID="8ba1b8ce-8332-45c9-bfb0-9a1842dea009" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.697930 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-nj2dd" event={"ID":"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526","Type":"ContainerStarted","Data":"d03c22578342edac43ef977c92563c16d4c79bf02047a1d0aeeeb99dc0d0b938"} Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.697985 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-nj2dd" event={"ID":"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526","Type":"ContainerStarted","Data":"253ebed9b658348ffac0701fabd0fcd44d419d9faec3b842712d6efafb3de24f"} Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.764262 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:42 crc kubenswrapper[4775]: E0123 14:06:42.767136 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:43.26711923 +0000 UTC m=+150.261947980 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.865041 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:42 crc kubenswrapper[4775]: E0123 14:06:42.865169 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:43.365142157 +0000 UTC m=+150.359970897 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.865282 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:42 crc kubenswrapper[4775]: E0123 14:06:42.865576 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:43.36556517 +0000 UTC m=+150.360394020 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.968330 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:42 crc kubenswrapper[4775]: E0123 14:06:42.968795 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:43.468775111 +0000 UTC m=+150.463603861 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.980536 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pmcq8"] Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.980696 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-svb79"] Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.985399 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh"] Jan 23 14:06:42 crc kubenswrapper[4775]: W0123 14:06:42.990893 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6995952d_6d8a_494d_842c_1d5cf9ee1207.slice/crio-7c10d0aa8c8e64d5633a1133583a17d88ad82ec55e1dccd0e66c17926ecbf30f WatchSource:0}: Error finding container 7c10d0aa8c8e64d5633a1133583a17d88ad82ec55e1dccd0e66c17926ecbf30f: Status 404 returned error can't find the container with id 7c10d0aa8c8e64d5633a1133583a17d88ad82ec55e1dccd0e66c17926ecbf30f Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.993695 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v2bx4"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.011316 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.038653 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-7gqzl"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.052990 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-apiserver/apiserver-76f77b778f-mc4h4"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.071600 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:43 crc kubenswrapper[4775]: E0123 14:06:43.072002 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:43.571986683 +0000 UTC m=+150.566815423 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.072833 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.087492 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fgb82"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.110332 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.160891 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.173269 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:43 crc kubenswrapper[4775]: E0123 14:06:43.173582 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:43.673566397 +0000 UTC m=+150.668395137 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.173716 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.173741 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.174889 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-2lgz4"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.177054 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.186703 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf"] Jan 23 14:06:43 crc kubenswrapper[4775]: W0123 14:06:43.193221 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1680ee1_e1af_4c87_b9d9_d29e2b0a5043.slice/crio-c48a040b1f1c8981b8aafbaab5be57849f54cf1f36d3b282689fc0eb2ddc7c94 WatchSource:0}: Error finding container c48a040b1f1c8981b8aafbaab5be57849f54cf1f36d3b282689fc0eb2ddc7c94: Status 404 returned error can't find the container with id c48a040b1f1c8981b8aafbaab5be57849f54cf1f36d3b282689fc0eb2ddc7c94 Jan 23 14:06:43 crc kubenswrapper[4775]: W0123 14:06:43.223785 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d6b6f17_bb56_49ba_8487_6e07346780a1.slice/crio-87bcaa2b52f967df4d7cb67d7c4f5117d6253d2482ec76ad6ef22eaa91c61737 WatchSource:0}: Error finding container 87bcaa2b52f967df4d7cb67d7c4f5117d6253d2482ec76ad6ef22eaa91c61737: Status 404 returned error can't find the container with id 87bcaa2b52f967df4d7cb67d7c4f5117d6253d2482ec76ad6ef22eaa91c61737 Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.272260 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.276189 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:43 crc kubenswrapper[4775]: E0123 14:06:43.276464 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:43.77645303 +0000 UTC m=+150.771281770 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.278453 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.320127 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-c9x8w"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.336561 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-f7z9k"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.356381 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-m5nll"] Jan 23 14:06:43 crc kubenswrapper[4775]: W0123 14:06:43.359661 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaaac7553_88f9_49bd_811f_e993ad0cd40d.slice/crio-5376a6ff674796a38546c913c7f44559d2a122040294b5bbb2a622468116412e WatchSource:0}: Error finding container 5376a6ff674796a38546c913c7f44559d2a122040294b5bbb2a622468116412e: Status 404 returned error can't find the container with id 5376a6ff674796a38546c913c7f44559d2a122040294b5bbb2a622468116412e Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.361454 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7"] Jan 23 14:06:43 crc kubenswrapper[4775]: W0123 14:06:43.372102 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5dfee7e_59a9_43b1_bd2e_f3200ea5322c.slice/crio-321c081398890ce739ccec973358fba4a5d53864473eb1c957cd54300bb1cbcc WatchSource:0}: Error finding container 321c081398890ce739ccec973358fba4a5d53864473eb1c957cd54300bb1cbcc: Status 404 returned error can't find the container with id 321c081398890ce739ccec973358fba4a5d53864473eb1c957cd54300bb1cbcc Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.373210 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.377040 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:43 crc kubenswrapper[4775]: E0123 14:06:43.377275 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:43.87725942 +0000 UTC m=+150.872088160 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.382095 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-kmqrn" podStartSLOduration=5.382076894 podStartE2EDuration="5.382076894s" podCreationTimestamp="2026-01-23 14:06:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:43.369436956 +0000 UTC m=+150.364265696" watchObservedRunningTime="2026-01-23 14:06:43.382076894 +0000 UTC m=+150.376905634" Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.382584 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.411883 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" podStartSLOduration=127.411864933 podStartE2EDuration="2m7.411864933s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:43.410264546 +0000 UTC m=+150.405093286" watchObservedRunningTime="2026-01-23 14:06:43.411864933 +0000 UTC m=+150.406693673" Jan 23 14:06:43 crc kubenswrapper[4775]: W0123 14:06:43.436488 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f5d381d_3a9d_4ba4_85fb_e9008e359729.slice/crio-e595c090e2bb8998e697905db90181d506f0b2b2dad59890adc39b1bd0afb2b9 WatchSource:0}: Error finding container e595c090e2bb8998e697905db90181d506f0b2b2dad59890adc39b1bd0afb2b9: Status 404 returned error can't find the container with id e595c090e2bb8998e697905db90181d506f0b2b2dad59890adc39b1bd0afb2b9 Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.455971 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.478359 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:43 crc kubenswrapper[4775]: E0123 14:06:43.478648 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:43.978635537 +0000 UTC m=+150.973464277 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.554139 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-nj2dd" podStartSLOduration=127.554119942 podStartE2EDuration="2m7.554119942s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:43.550781002 +0000 UTC m=+150.545609742" watchObservedRunningTime="2026-01-23 14:06:43.554119942 +0000 UTC m=+150.548948682" Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.581778 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:43 crc kubenswrapper[4775]: E0123 14:06:43.582191 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:44.08217604 +0000 UTC m=+151.077004780 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.587708 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ddqcf" podStartSLOduration=127.587690564 podStartE2EDuration="2m7.587690564s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:43.582466028 +0000 UTC m=+150.577294768" watchObservedRunningTime="2026-01-23 14:06:43.587690564 +0000 UTC m=+150.582519294" Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.587930 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.597223 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-btttg"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.619356 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.623552 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-mvqcg" podStartSLOduration=127.623511244 podStartE2EDuration="2m7.623511244s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:43.609152665 +0000 UTC m=+150.603981405" watchObservedRunningTime="2026-01-23 14:06:43.623511244 +0000 UTC m=+150.618339994" Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.639488 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-bvqqf"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.646792 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.656051 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" podStartSLOduration=127.656009374 podStartE2EDuration="2m7.656009374s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:43.631817132 +0000 UTC m=+150.626645882" watchObservedRunningTime="2026-01-23 14:06:43.656009374 +0000 UTC m=+150.650838114" Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.671959 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:06:43 crc kubenswrapper[4775]: [-]has-synced failed: reason 
withheld Jan 23 14:06:43 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:06:43 crc kubenswrapper[4775]: healthz check failed Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.672023 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.674495 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" podStartSLOduration=127.674477276 podStartE2EDuration="2m7.674477276s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:43.673145896 +0000 UTC m=+150.667974636" watchObservedRunningTime="2026-01-23 14:06:43.674477276 +0000 UTC m=+150.669306016" Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.682879 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:43 crc kubenswrapper[4775]: E0123 14:06:43.687928 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:44.187910067 +0000 UTC m=+151.182738807 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.691194 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.719931 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.781479 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.784135 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-br76j"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.784476 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:43 crc kubenswrapper[4775]: E0123 14:06:43.784758 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:44.284737269 +0000 UTC m=+151.279566009 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.790452 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm" event={"ID":"4304b2e3-9359-4caf-94dd-1e31716fee56","Type":"ContainerStarted","Data":"7ace852458ef3812ba419dd36db84c561e9ef2c87135369e7ac066c596e8234d"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.792878 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" event={"ID":"384fd47a-81d2-4219-8a66-fbeec5bae860","Type":"ContainerStarted","Data":"639ace8e745ba46a24dfc49b1c972ca70c3d991b11b131d7fe7adac9a53d8663"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.808690 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" event={"ID":"c575b767-e334-406f-849d-e562d70985fd","Type":"ContainerStarted","Data":"783601f2459ac0ca5a884fc5bd0420d2b8d2891d3547354b43ae23ef08178d0c"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.817384 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" event={"ID":"85a9044b-9089-4a6a-87e6-06372c531aa9","Type":"ContainerStarted","Data":"f86a0f102d54f6d33929d8d65d55921f6232664bd9afa670a151241e47d9a59e"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.817438 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" event={"ID":"85a9044b-9089-4a6a-87e6-06372c531aa9","Type":"ContainerStarted","Data":"5b9f2f71532f723f9a1eaffdd6ca3478934eb1a9d3de769264a6371bf8165faa"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.824314 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" event={"ID":"2d6b6f17-bb56-49ba-8487-6e07346780a1","Type":"ContainerStarted","Data":"87bcaa2b52f967df4d7cb67d7c4f5117d6253d2482ec76ad6ef22eaa91c61737"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.833126 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6" event={"ID":"53bbb237-ded5-402c-9bc3-a1cda18e8cfb","Type":"ContainerStarted","Data":"3b278d30f0743f4dd565a6dbd3d122f825348b8dc8e4879597d6448c00e8fa52"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.834290 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq" event={"ID":"549e54fa-53eb-4a9d-9578-5cfbd02bb28d","Type":"ContainerStarted","Data":"6530997e5286a83d4549d5bf7514360e30ecb9d8d39bfd63a0f6c277f500f34c"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.834308 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq" event={"ID":"549e54fa-53eb-4a9d-9578-5cfbd02bb28d","Type":"ContainerStarted","Data":"886b09d921c71643ce311527caaa30a8c2770755e19f3690687807ae42b0c192"} Jan 23 
14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.835614 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-f7z9k" event={"ID":"d5dfee7e-59a9-43b1-bd2e-f3200ea5322c","Type":"ContainerStarted","Data":"321c081398890ce739ccec973358fba4a5d53864473eb1c957cd54300bb1cbcc"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.837780 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" event={"ID":"c7fae259-48f4-4d23-8685-6440a5246423","Type":"ContainerStarted","Data":"004f91f5be711dcadb27a7ce5a8e5de59588c53fc1f21044e803235c16b3edf4"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.837826 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" event={"ID":"c7fae259-48f4-4d23-8685-6440a5246423","Type":"ContainerStarted","Data":"b9c62366623dfd8a2b8c616f1243b39a22f93f3e949fbe7e19d4fed3faa7b230"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.849495 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" event={"ID":"f9750de6-fc79-440e-8ad4-07acbe4edb49","Type":"ContainerStarted","Data":"a25b9ed2fe7fe9b82a067f014d9b57d4d05a554e8f8f383aecd06916c3d9fbc7"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.853441 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" event={"ID":"dbaf4876-b99e-4096-9f36-5c888312ddab","Type":"ContainerStarted","Data":"62636a249346418acceb0e0644d57d1425845da4d462082b136a17efa80927bf"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.860115 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2" event={"ID":"0f5d381d-3a9d-4ba4-85fb-e9008e359729","Type":"ContainerStarted","Data":"e595c090e2bb8998e697905db90181d506f0b2b2dad59890adc39b1bd0afb2b9"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.868564 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-btttg" event={"ID":"f73c288c-acf3-4ce7-81c7-63953b2fc087","Type":"ContainerStarted","Data":"477ef4f6069365949577cf970eccbfb2d9b7d3ef16917b0e154e5b65def390c9"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.872778 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx" event={"ID":"8a2f579b-0f13-47dd-9566-dd57100ab22a","Type":"ContainerStarted","Data":"ebabb4649f38ec987dc44cce66ab98605c75a09267b77e56bccdac83686fdf45"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.880813 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" event={"ID":"aaac7553-88f9-49bd-811f-e993ad0cd40d","Type":"ContainerStarted","Data":"5376a6ff674796a38546c913c7f44559d2a122040294b5bbb2a622468116412e"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.886252 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:43 crc kubenswrapper[4775]: E0123 14:06:43.886579 4775 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:44.3865664 +0000 UTC m=+151.381395140 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.887284 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-m5nll" event={"ID":"924bd720-98da-4f7b-afbc-a7bfa822368f","Type":"ContainerStarted","Data":"162fa534077513c17bd67067ada7c374dd8a402911c1d8ec09f73ab9b9ad96ab"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.894651 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6" event={"ID":"2ad73212-43f6-49db-a38b-678185cbe9d4","Type":"ContainerStarted","Data":"27385bef26a71fabd2cee54731985e6e9f6add72ddba47686bab731ae015c209"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.910054 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" event={"ID":"cc6b05de-2295-4c6a-8f11-367da8bdcf00","Type":"ContainerStarted","Data":"554f386f9ea3a922d1075f75cb987f87980f014544d89c8d19624362f1eb02ce"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.917073 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7" event={"ID":"6e802822-9935-46de-947b-c77bf8da4f9e","Type":"ContainerStarted","Data":"0d80e6b67e9f1268fd378b0d81dd590d53cf6773ef5207730afe777426e70b8d"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.932161 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl" event={"ID":"a925ae96-5ea9-4dba-9fbf-2ec5f5295026","Type":"ContainerStarted","Data":"f40e82999acc6a1c646bbd5afcf2ea2228d735f9af02314ff6b7711368b69212"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.938685 4775 csr.go:261] certificate signing request csr-zrpmh is approved, waiting to be issued Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.945392 4775 csr.go:257] certificate signing request csr-zrpmh is issued Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.953911 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-7gqzl" event={"ID":"ba896a24-e6f2-4480-807b-b3c5b6232cea","Type":"ContainerStarted","Data":"f771e3a0545124f9aff5df19948a12cf28d603c121f483e5eeca5e318e63c454"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.954265 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-7gqzl" Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.957957 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" 
event={"ID":"1f3aab1c-726d-4027-b629-e04916bc4f8b","Type":"ContainerStarted","Data":"1976824d0d7581f25778cade1ceabbaefa46516e629ce58d32cb2d84aec22a6a"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.958046 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" event={"ID":"1f3aab1c-726d-4027-b629-e04916bc4f8b","Type":"ContainerStarted","Data":"c804f2807463870f94ca39d16cb9e5b2566a2fdc9148b1292a1636387b79edff"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.958837 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.960071 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-2lgz4" event={"ID":"b2e6a5f5-108e-4832-8036-58e1228a7f4f","Type":"ContainerStarted","Data":"eb3f4de1b3b2b446a6c1c5c2b453d25bb2f07abbbaa38dc068fe9c80edd97628"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.961753 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" event={"ID":"8ac48e42-bde7-4701-b994-825906603b06","Type":"ContainerStarted","Data":"f51d1a8b2d530002962d11af10b4a9dc9403d48b6849c26ac64175b119f21f51"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.961782 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" event={"ID":"8ac48e42-bde7-4701-b994-825906603b06","Type":"ContainerStarted","Data":"14f4d6283aff6de605f724a865763d27a0a448211bbacd5d102fb5562e6f44ef"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.962492 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.963741 4775 patch_prober.go:28] interesting pod/console-operator-58897d9998-7gqzl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.963825 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-7gqzl" podUID="ba896a24-e6f2-4480-807b-b3c5b6232cea" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.963894 4775 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-v2bx4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.963925 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" podUID="1f3aab1c-726d-4027-b629-e04916bc4f8b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.964523 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-f9d7485db-fgb82" event={"ID":"a6821f92-2d15-4dc0-92ed-7a30cef98db9","Type":"ContainerStarted","Data":"ef54fd5e26cacb272f1e1be9cfe28c0c931df15d597bb7da81a47734c646362b"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.966172 4775 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-pmcq8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.966226 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" podUID="8ac48e42-bde7-4701-b994-825906603b06" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.969641 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psxgx" event={"ID":"13e16abe-9325-4638-8b20-7195b7af8e68","Type":"ContainerStarted","Data":"cfdaa792dd53dd9618e6fd6cc1a7572ca8ac417bc5bfaedaf956ce88910394a2"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.973320 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh" event={"ID":"6995952d-6d8a-494d-842c-1d5cf9ee1207","Type":"ContainerStarted","Data":"78954533eb3567fbf192793631faefaf0c03ec3d3d29c16c6d6c22000f8c91f2"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.973353 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh" event={"ID":"6995952d-6d8a-494d-842c-1d5cf9ee1207","Type":"ContainerStarted","Data":"7c10d0aa8c8e64d5633a1133583a17d88ad82ec55e1dccd0e66c17926ecbf30f"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.979149 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" event={"ID":"a9a77e3c-0e93-45f9-ab81-7dfbd2916588","Type":"ContainerStarted","Data":"126d7f9344248499833b2fa9bffa79374396f9b7ca1fc1c07f0f0a3674655194"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.987138 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:43 crc kubenswrapper[4775]: E0123 14:06:43.987517 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:44.487493544 +0000 UTC m=+151.482322284 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.987857 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:43 crc kubenswrapper[4775]: E0123 14:06:43.992377 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:44.492356499 +0000 UTC m=+151.487185239 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.001243 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" event={"ID":"e1680ee1-e1af-4c87-b9d9-d29e2b0a5043","Type":"ContainerStarted","Data":"c48a040b1f1c8981b8aafbaab5be57849f54cf1f36d3b282689fc0eb2ddc7c94"} Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.001651 4775 patch_prober.go:28] interesting pod/downloads-7954f5f757-mvqcg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.001692 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mvqcg" podUID="8ba1b8ce-8332-45c9-bfb0-9a1842dea009" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.093617 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:44 crc kubenswrapper[4775]: E0123 14:06:44.096445 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-23 14:06:44.596425347 +0000 UTC m=+151.591254247 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.198156 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:44 crc kubenswrapper[4775]: E0123 14:06:44.198500 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:44.698487415 +0000 UTC m=+151.693316155 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.300134 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:44 crc kubenswrapper[4775]: E0123 14:06:44.300841 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:44.800823721 +0000 UTC m=+151.795652461 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.401967 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:44 crc kubenswrapper[4775]: E0123 14:06:44.406150 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:44.906130056 +0000 UTC m=+151.900958796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.503418 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:44 crc kubenswrapper[4775]: E0123 14:06:44.503767 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:45.003746201 +0000 UTC m=+151.998574941 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.605353 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:44 crc kubenswrapper[4775]: E0123 14:06:44.605977 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:45.105960994 +0000 UTC m=+152.100789734 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.659007 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:06:44 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:06:44 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:06:44 crc kubenswrapper[4775]: healthz check failed Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.659049 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.710634 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:44 crc kubenswrapper[4775]: E0123 14:06:44.710955 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:45.210939119 +0000 UTC m=+152.205767859 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.711116 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:44 crc kubenswrapper[4775]: E0123 14:06:44.711561 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:45.211553147 +0000 UTC m=+152.206381887 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.811736 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:44 crc kubenswrapper[4775]: E0123 14:06:44.812052 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:45.312033158 +0000 UTC m=+152.306861898 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.907643 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq" podStartSLOduration=128.907627652 podStartE2EDuration="2m8.907627652s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:44.904730336 +0000 UTC m=+151.899559076" watchObservedRunningTime="2026-01-23 14:06:44.907627652 +0000 UTC m=+151.902456392" Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.912653 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:44 crc kubenswrapper[4775]: E0123 14:06:44.913007 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:45.412993193 +0000 UTC m=+152.407821933 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.948352 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-23 14:01:43 +0000 UTC, rotation deadline is 2026-11-22 05:07:29.151809769 +0000 UTC Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.948432 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7263h0m44.203380448s for next certificate rotation Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.962982 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-7gqzl" podStartSLOduration=128.962965575 podStartE2EDuration="2m8.962965575s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:44.960628075 +0000 UTC m=+151.955456825" watchObservedRunningTime="2026-01-23 14:06:44.962965575 +0000 UTC m=+151.957794315" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.021298 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:45 crc kubenswrapper[4775]: E0123 14:06:45.021634 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:45.521619007 +0000 UTC m=+152.516447747 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.032951 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psxgx" podStartSLOduration=129.032936805 podStartE2EDuration="2m9.032936805s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:44.985194899 +0000 UTC m=+151.980023639" watchObservedRunningTime="2026-01-23 14:06:45.032936805 +0000 UTC m=+152.027765545" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.033195 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" podStartSLOduration=129.033191022 podStartE2EDuration="2m9.033191022s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.030253944 +0000 UTC m=+152.025082684" watchObservedRunningTime="2026-01-23 14:06:45.033191022 +0000 UTC m=+152.028019762" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.040829 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4" event={"ID":"88ecf2d3-bdec-4fe8-a567-44550e85bb19","Type":"ContainerStarted","Data":"be68934ceedd350fb3fd62ed0d8974ea06e81205df9940c9174d70e523757526"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.040867 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4" event={"ID":"88ecf2d3-bdec-4fe8-a567-44550e85bb19","Type":"ContainerStarted","Data":"d22527b7f6abd399b4e5da0e46745fb66ca5922f663f2fb29fa0c5b9c706c45a"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.049268 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" event={"ID":"d7707d7a-bfb7-4600-98f4-be607d9e77f4","Type":"ContainerStarted","Data":"573d26feaa63963c83105249d1b5ec9688369844c41c6889cdddd83542400533"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.049308 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" event={"ID":"d7707d7a-bfb7-4600-98f4-be607d9e77f4","Type":"ContainerStarted","Data":"cbf05869c4786c2bfe2dac571e3097dd06a662629a0eb18f501f6aefb8de7f66"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.050244 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.061644 4775 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-rfbk5 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.061706 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" podUID="d7707d7a-bfb7-4600-98f4-be607d9e77f4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.062178 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" event={"ID":"85a9044b-9089-4a6a-87e6-06372c531aa9","Type":"ContainerStarted","Data":"eab78b0bfbb1e2cebe092155c9772fd6a290b1c5a307cf780d4c398326697ba1"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.079017 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-7gqzl" event={"ID":"ba896a24-e6f2-4480-807b-b3c5b6232cea","Type":"ContainerStarted","Data":"0357894b6772ffb91ddd27f777486f5eb9f0d86be383f7b3a92aa98b9889165c"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.079723 4775 patch_prober.go:28] interesting pod/console-operator-58897d9998-7gqzl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.079749 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-7gqzl" podUID="ba896a24-e6f2-4480-807b-b3c5b6232cea" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.104468 4775 generic.go:334] "Generic (PLEG): container finished" podID="f9750de6-fc79-440e-8ad4-07acbe4edb49" containerID="200b0617b8f8a4369cb8cc24a748ac72dde52270d577d79cddfd4b0d1ba88c77" exitCode=0 Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.104535 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" event={"ID":"f9750de6-fc79-440e-8ad4-07acbe4edb49","Type":"ContainerDied","Data":"200b0617b8f8a4369cb8cc24a748ac72dde52270d577d79cddfd4b0d1ba88c77"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.111080 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" podStartSLOduration=129.111065918 podStartE2EDuration="2m9.111065918s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.067832117 +0000 UTC m=+152.062660857" watchObservedRunningTime="2026-01-23 14:06:45.111065918 +0000 UTC m=+152.105894658" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.112070 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" podStartSLOduration=129.112064828 podStartE2EDuration="2m9.112064828s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 
14:06:45.111312345 +0000 UTC m=+152.106141085" watchObservedRunningTime="2026-01-23 14:06:45.112064828 +0000 UTC m=+152.106893568" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.113937 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bvqqf" event={"ID":"6d1f9f7b-5676-4445-b8ec-1288e6beff20","Type":"ContainerStarted","Data":"ac7d274e3b76addeb461e88a6e38422c130dcf74dc86a462564f79ccc6d24226"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.122450 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:45 crc kubenswrapper[4775]: E0123 14:06:45.122798 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:45.622783628 +0000 UTC m=+152.617612368 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.127828 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" event={"ID":"e1680ee1-e1af-4c87-b9d9-d29e2b0a5043","Type":"ContainerStarted","Data":"0a16f0a2d7252e8f056fd9b2124ef0f9eb64800c7655173b0553fc20d85d036e"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.128097 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" event={"ID":"e1680ee1-e1af-4c87-b9d9-d29e2b0a5043","Type":"ContainerStarted","Data":"e87a3cf924507e263fdf6fc6380f43067923478788b82dfed3f254fe4119ec0f"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.137720 4775 generic.go:334] "Generic (PLEG): container finished" podID="c7fae259-48f4-4d23-8685-6440a5246423" containerID="004f91f5be711dcadb27a7ce5a8e5de59588c53fc1f21044e803235c16b3edf4" exitCode=0 Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.137852 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" event={"ID":"c7fae259-48f4-4d23-8685-6440a5246423","Type":"ContainerDied","Data":"004f91f5be711dcadb27a7ce5a8e5de59588c53fc1f21044e803235c16b3edf4"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.142019 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh" podStartSLOduration=129.141998302 podStartE2EDuration="2m9.141998302s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.140520287 +0000 UTC m=+152.135349027" 
watchObservedRunningTime="2026-01-23 14:06:45.141998302 +0000 UTC m=+152.136827042" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.178616 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" event={"ID":"384fd47a-81d2-4219-8a66-fbeec5bae860","Type":"ContainerStarted","Data":"5a4ee699255124b6c9de62ae3f6c63385a8d2c91c5782187dbc607d0534d6ca1"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.179385 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.184775 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" event={"ID":"dbaf4876-b99e-4096-9f36-5c888312ddab","Type":"ContainerStarted","Data":"e7bd6df12f07013d959edd287e01e0630e48c8a8430cefe74f71eac69a37ec9e"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.184844 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" event={"ID":"dbaf4876-b99e-4096-9f36-5c888312ddab","Type":"ContainerStarted","Data":"79f891f2deda6d5c49955de71cdbf747e5162a9a62e11d7fb5d0d1490dd98771"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.184982 4775 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-65w5f container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.185027 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" podUID="384fd47a-81d2-4219-8a66-fbeec5bae860" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.189864 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" podStartSLOduration=129.189794779 podStartE2EDuration="2m9.189794779s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.183441869 +0000 UTC m=+152.178270609" watchObservedRunningTime="2026-01-23 14:06:45.189794779 +0000 UTC m=+152.184623519" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.219332 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-m5nll" event={"ID":"924bd720-98da-4f7b-afbc-a7bfa822368f","Type":"ContainerStarted","Data":"212dd351f81a598a8bd5dbf1dbca4465a9c1ebafd52cd7baffb3ad9b770b3a5a"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.223499 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:45 crc kubenswrapper[4775]: E0123 14:06:45.224613 4775 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:45.724597738 +0000 UTC m=+152.719426478 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.225416 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" event={"ID":"2d6b6f17-bb56-49ba-8487-6e07346780a1","Type":"ContainerStarted","Data":"bd180f88acb55bc6174b54cab0740792964b942d82c9bf0cffd2ac1751bececd"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.242303 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-2lgz4" event={"ID":"b2e6a5f5-108e-4832-8036-58e1228a7f4f","Type":"ContainerStarted","Data":"b3efc1f7717b5f270092d41d50747303298de4a30f116a9a55470729a7b0e1e9"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.259301 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" event={"ID":"a9a77e3c-0e93-45f9-ab81-7dfbd2916588","Type":"ContainerStarted","Data":"0180d579f234a3f26f7595abf341e660581404c07fa388dc580f716a183ffec5"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.260190 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.262642 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" podStartSLOduration=129.262622994 podStartE2EDuration="2m9.262622994s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.259849971 +0000 UTC m=+152.254678711" watchObservedRunningTime="2026-01-23 14:06:45.262622994 +0000 UTC m=+152.257451734" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.265110 4775 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-lqcpn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.265177 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" podUID="a9a77e3c-0e93-45f9-ab81-7dfbd2916588" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.280843 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fgb82" 
event={"ID":"a6821f92-2d15-4dc0-92ed-7a30cef98db9","Type":"ContainerStarted","Data":"f4aaa0765a07f4839c71e2b2a303a3c0c625cc8d1414133eff523c9a0838b442"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.283348 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx" event={"ID":"8a2f579b-0f13-47dd-9566-dd57100ab22a","Type":"ContainerStarted","Data":"aca20b59d669fc8018e913059e0ac8734aa8244d21a04ce0eb4a91fadfbe1e4b"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.310404 4775 generic.go:334] "Generic (PLEG): container finished" podID="c575b767-e334-406f-849d-e562d70985fd" containerID="4ad8d1efee4bf79acffb5d566a3c125a4291d13446cc6a0749f1d14599861de0" exitCode=0 Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.310487 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" event={"ID":"c575b767-e334-406f-849d-e562d70985fd","Type":"ContainerDied","Data":"4ad8d1efee4bf79acffb5d566a3c125a4291d13446cc6a0749f1d14599861de0"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.316243 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" podStartSLOduration=129.316224105 podStartE2EDuration="2m9.316224105s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.310268607 +0000 UTC m=+152.305097347" watchObservedRunningTime="2026-01-23 14:06:45.316224105 +0000 UTC m=+152.311052845" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.337820 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:45 crc kubenswrapper[4775]: E0123 14:06:45.345758 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:45.845743206 +0000 UTC m=+152.840571946 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.353280 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4" podStartSLOduration=129.353266251 podStartE2EDuration="2m9.353266251s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.351703774 +0000 UTC m=+152.346532584" watchObservedRunningTime="2026-01-23 14:06:45.353266251 +0000 UTC m=+152.348094991" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.381448 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-br76j" event={"ID":"98e5fa0e-5fb3-4a38-bcdc-328a22d4460f","Type":"ContainerStarted","Data":"4163a4edaf0e033d5e364222a1a64ac6ccb4ad9e12f8f91e0be95e4a44ff6a02"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.381779 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-br76j" event={"ID":"98e5fa0e-5fb3-4a38-bcdc-328a22d4460f","Type":"ContainerStarted","Data":"483a051e7d1d0603d8c840c5df82a09845654c9edf0f450f8cfd1a455e73da79"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.386616 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" podStartSLOduration=129.386596546 podStartE2EDuration="2m9.386596546s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.38203399 +0000 UTC m=+152.376862730" watchObservedRunningTime="2026-01-23 14:06:45.386596546 +0000 UTC m=+152.381425286" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.387184 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2" event={"ID":"0f5d381d-3a9d-4ba4-85fb-e9008e359729","Type":"ContainerStarted","Data":"ef2a13aaab2e42e13f0a0b5435cc4064458a0380affaecdebb1304d206440206"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.395926 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8" event={"ID":"90f0ee56-8c51-4a42-ae4e-385ff7453aa7","Type":"ContainerStarted","Data":"5755f1499d835cc80a9aa7263bf7fd543392d67a56715ef8a8aab24dfb9da1b1"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.404255 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6" event={"ID":"53bbb237-ded5-402c-9bc3-a1cda18e8cfb","Type":"ContainerStarted","Data":"f274109f57aead4df1f63e3ed82694af2d2f87335f1d763e0cf3d8d62b18a663"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.404867 4775 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.409552 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-btttg" event={"ID":"f73c288c-acf3-4ce7-81c7-63953b2fc087","Type":"ContainerStarted","Data":"fb6dfd5b49967c52406c64c6cd631e5d024adc2620f6107d8c183146c200f957"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.412578 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm" event={"ID":"4304b2e3-9359-4caf-94dd-1e31716fee56","Type":"ContainerStarted","Data":"571218eaeef0e770b801d198a59edd6b660899d8cc32b4fdd85615dd021aa7c6"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.412991 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.414917 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl" event={"ID":"a925ae96-5ea9-4dba-9fbf-2ec5f5295026","Type":"ContainerStarted","Data":"fb6178069a1a5eb139f802b984fc975874815dfe021944ef084a0e32cd5f3079"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.416048 4775 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-2vnwm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.416088 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm" podUID="4304b2e3-9359-4caf-94dd-1e31716fee56" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.422015 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" podStartSLOduration=129.422003424 podStartE2EDuration="2m9.422003424s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.420380115 +0000 UTC m=+152.415208855" watchObservedRunningTime="2026-01-23 14:06:45.422003424 +0000 UTC m=+152.416832164" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.428060 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6" event={"ID":"2ad73212-43f6-49db-a38b-678185cbe9d4","Type":"ContainerStarted","Data":"f69e0599ca69e4d11df671bc24e8330f92e97bbe5a740bbbde353b49ed3b0a4a"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.447168 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:45 crc kubenswrapper[4775]: E0123 14:06:45.449102 4775 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:45.949066502 +0000 UTC m=+152.943895242 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.465010 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7" event={"ID":"6e802822-9935-46de-947b-c77bf8da4f9e","Type":"ContainerStarted","Data":"8c3c2436f2346ccfaa7c4d3c6c3378201bba5c8cabf7e5ee34c3cb1dd7676b40"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.469454 4775 patch_prober.go:28] interesting pod/downloads-7954f5f757-mvqcg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.469496 4775 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-v2bx4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.469454 4775 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-pmcq8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.469519 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" podUID="8ac48e42-bde7-4701-b994-825906603b06" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.469494 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mvqcg" podUID="8ba1b8ce-8332-45c9-bfb0-9a1842dea009" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.469518 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" podUID="1f3aab1c-726d-4027-b629-e04916bc4f8b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.470088 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-2lgz4" 
podStartSLOduration=129.470067649 podStartE2EDuration="2m9.470067649s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.463121001 +0000 UTC m=+152.457949741" watchObservedRunningTime="2026-01-23 14:06:45.470067649 +0000 UTC m=+152.464896389" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.544134 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" podStartSLOduration=129.54411679 podStartE2EDuration="2m9.54411679s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.511106584 +0000 UTC m=+152.505935324" watchObservedRunningTime="2026-01-23 14:06:45.54411679 +0000 UTC m=+152.538945530" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.549873 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:45 crc kubenswrapper[4775]: E0123 14:06:45.554607 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:46.054595733 +0000 UTC m=+153.049424463 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.596518 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx" podStartSLOduration=129.596489224 podStartE2EDuration="2m9.596489224s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.595840515 +0000 UTC m=+152.590669255" watchObservedRunningTime="2026-01-23 14:06:45.596489224 +0000 UTC m=+152.591317964" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.629438 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-m5nll" podStartSLOduration=7.629392447 podStartE2EDuration="7.629392447s" podCreationTimestamp="2026-01-23 14:06:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.628724927 +0000 UTC m=+152.623553667" watchObservedRunningTime="2026-01-23 14:06:45.629392447 +0000 UTC m=+152.624221187" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.652573 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:45 crc kubenswrapper[4775]: E0123 14:06:45.652970 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:46.152955741 +0000 UTC m=+153.147784471 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.661168 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:06:45 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:06:45 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:06:45 crc kubenswrapper[4775]: healthz check failed Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.661215 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.676620 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-fgb82" podStartSLOduration=129.676603027 podStartE2EDuration="2m9.676603027s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.6740205 +0000 UTC m=+152.668849240" watchObservedRunningTime="2026-01-23 14:06:45.676603027 +0000 UTC m=+152.671431767" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.709490 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" podStartSLOduration=129.709476519 podStartE2EDuration="2m9.709476519s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.707954123 +0000 UTC m=+152.702782873" watchObservedRunningTime="2026-01-23 14:06:45.709476519 +0000 UTC m=+152.704305259" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.741207 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2" podStartSLOduration=129.741188916 podStartE2EDuration="2m9.741188916s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.741152654 +0000 UTC m=+152.735981394" watchObservedRunningTime="2026-01-23 14:06:45.741188916 +0000 UTC m=+152.736017656" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.756079 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:45 crc kubenswrapper[4775]: E0123 
14:06:45.756562 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:46.256540364 +0000 UTC m=+153.251369104 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.779004 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl" podStartSLOduration=129.778980274 podStartE2EDuration="2m9.778980274s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.777124479 +0000 UTC m=+152.771953239" watchObservedRunningTime="2026-01-23 14:06:45.778980274 +0000 UTC m=+152.773809014" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.825104 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7" podStartSLOduration=129.825087591 podStartE2EDuration="2m9.825087591s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.822770322 +0000 UTC m=+152.817599062" watchObservedRunningTime="2026-01-23 14:06:45.825087591 +0000 UTC m=+152.819916331" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.857493 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:45 crc kubenswrapper[4775]: E0123 14:06:45.858087 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:46.358070956 +0000 UTC m=+153.352899686 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.868146 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-br76j" podStartSLOduration=129.868108366 podStartE2EDuration="2m9.868108366s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.866134187 +0000 UTC m=+152.860962927" watchObservedRunningTime="2026-01-23 14:06:45.868108366 +0000 UTC m=+152.862937096" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.959568 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:45 crc kubenswrapper[4775]: E0123 14:06:45.959899 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:46.459887597 +0000 UTC m=+153.454716337 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.977344 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-btttg" podStartSLOduration=129.977323037 podStartE2EDuration="2m9.977323037s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.974232975 +0000 UTC m=+152.969061725" watchObservedRunningTime="2026-01-23 14:06:45.977323037 +0000 UTC m=+152.972151777" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.977865 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm" podStartSLOduration=129.977857193 podStartE2EDuration="2m9.977857193s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.920615564 +0000 UTC m=+152.915444294" watchObservedRunningTime="2026-01-23 14:06:45.977857193 +0000 UTC m=+152.972685933" Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.050376 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6" podStartSLOduration=130.050362119 podStartE2EDuration="2m10.050362119s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:46.049500903 +0000 UTC m=+153.044329643" watchObservedRunningTime="2026-01-23 14:06:46.050362119 +0000 UTC m=+153.045190859" Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.051521 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6" podStartSLOduration=130.051512393 podStartE2EDuration="2m10.051512393s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:46.017290481 +0000 UTC m=+153.012119221" watchObservedRunningTime="2026-01-23 14:06:46.051512393 +0000 UTC m=+153.046341133" Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.062325 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:46 crc kubenswrapper[4775]: E0123 14:06:46.062779 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-23 14:06:46.562759399 +0000 UTC m=+153.557588139 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.076735 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8" podStartSLOduration=130.076718156 podStartE2EDuration="2m10.076718156s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:46.072883191 +0000 UTC m=+153.067711931" watchObservedRunningTime="2026-01-23 14:06:46.076718156 +0000 UTC m=+153.071546886" Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.164226 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:46 crc kubenswrapper[4775]: E0123 14:06:46.164585 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:46.664568229 +0000 UTC m=+153.659396969 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.265203 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:46 crc kubenswrapper[4775]: E0123 14:06:46.265576 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:46.765549265 +0000 UTC m=+153.760378005 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.366794 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:46 crc kubenswrapper[4775]: E0123 14:06:46.367239 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:46.867219121 +0000 UTC m=+153.862047951 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.467631 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:46 crc kubenswrapper[4775]: E0123 14:06:46.467947 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:46.967920538 +0000 UTC m=+153.962749278 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.481280 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" event={"ID":"c7fae259-48f4-4d23-8685-6440a5246423","Type":"ContainerStarted","Data":"d7f784e0b78ae154a92ea6388c382889297f22e67565ca63df3c3516d80a564d"} Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.481549 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.484429 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" event={"ID":"aaac7553-88f9-49bd-811f-e993ad0cd40d","Type":"ContainerStarted","Data":"9207f4cc20df8acb8c3112286787b29fed970bb4df13c6df0c8107bf4ff986a5"} Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.486795 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-f7z9k" event={"ID":"d5dfee7e-59a9-43b1-bd2e-f3200ea5322c","Type":"ContainerStarted","Data":"a9be1eb821096367941d651ad0fbaff1e3e70493d07ab58ce51929d49783e20c"} Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.486833 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-f7z9k" event={"ID":"d5dfee7e-59a9-43b1-bd2e-f3200ea5322c","Type":"ContainerStarted","Data":"85f9026711c001686802c646a663af3e3985c550e6eb233b0ef2642f09febb26"} Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.489626 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" event={"ID":"f9750de6-fc79-440e-8ad4-07acbe4edb49","Type":"ContainerStarted","Data":"e0170441bae0e68b6e1dc341a7d7696cabd456807615f2fb845fd149c49668af"} Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.489651 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" event={"ID":"f9750de6-fc79-440e-8ad4-07acbe4edb49","Type":"ContainerStarted","Data":"2c6c5488e5e2dd8849ad30f9e952900c6d3bb3901eb047ea513f1d5f025751a7"} Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.491592 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-br76j" event={"ID":"98e5fa0e-5fb3-4a38-bcdc-328a22d4460f","Type":"ContainerStarted","Data":"c7b928bf2b3f7e1db847c894c2c5c621ac31be41796fd8cc889baaa3cba16c21"} Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.493668 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8" event={"ID":"90f0ee56-8c51-4a42-ae4e-385ff7453aa7","Type":"ContainerStarted","Data":"85e2d3553fb927925148170099f82cdfdca611b89af79ff5fcf8fe5091d8f0ef"} Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.496266 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-2lgz4" 
event={"ID":"b2e6a5f5-108e-4832-8036-58e1228a7f4f","Type":"ContainerStarted","Data":"031e2c94b6730404430e4745984c61804fdbb8da3944262ac0b02b0ba5d32aaf"} Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.499847 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bvqqf" event={"ID":"6d1f9f7b-5676-4445-b8ec-1288e6beff20","Type":"ContainerStarted","Data":"f995c2ab0ef4c82987021f947525ce0f43184fb0566a3ea5cb3ec1e44655269a"} Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.499902 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bvqqf" event={"ID":"6d1f9f7b-5676-4445-b8ec-1288e6beff20","Type":"ContainerStarted","Data":"376c7e111342068316b1a764195bd90aa01daf1eb6d8d2ab337bc21ec2589d46"} Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.500425 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-bvqqf" Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.502475 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6" event={"ID":"53bbb237-ded5-402c-9bc3-a1cda18e8cfb","Type":"ContainerStarted","Data":"68d8b8c6b41b123576844085d332ab2c900d485c02003ae5d7de9da583809f10"} Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.504380 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7" event={"ID":"6e802822-9935-46de-947b-c77bf8da4f9e","Type":"ContainerStarted","Data":"10c5831e1dc0b06cfffe4b21ff45c42157f053402a1e1ade4be36296023dfc50"} Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.507199 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" event={"ID":"c575b767-e334-406f-849d-e562d70985fd","Type":"ContainerStarted","Data":"d3516737a5109ef433d88489eb32b20fc6a9f40c17b89937c5af220085f560cb"} Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.517686 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.519370 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.572548 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:46 crc kubenswrapper[4775]: E0123 14:06:46.577598 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:47.077583462 +0000 UTC m=+154.072412202 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.589002 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" podStartSLOduration=130.588982553 podStartE2EDuration="2m10.588982553s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:46.582063746 +0000 UTC m=+153.576892486" watchObservedRunningTime="2026-01-23 14:06:46.588982553 +0000 UTC m=+153.583811283" Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.637516 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-bvqqf" podStartSLOduration=8.637500932 podStartE2EDuration="8.637500932s" podCreationTimestamp="2026-01-23 14:06:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:46.634372808 +0000 UTC m=+153.629201548" watchObservedRunningTime="2026-01-23 14:06:46.637500932 +0000 UTC m=+153.632329672" Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.657107 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:06:46 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:06:46 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:06:46 crc kubenswrapper[4775]: healthz check failed Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.657526 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.665756 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.681115 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:46 crc kubenswrapper[4775]: E0123 14:06:46.681291 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:47.181258428 +0000 UTC m=+154.176087168 (durationBeforeRetry 500ms). 
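The router's startup probe output above uses the aggregated healthz convention: one [+]/[-] line per named sub-check, a trailing "healthz check failed", and HTTP 500 whenever any sub-check fails, which prober.go then reports verbatim as "HTTP probe failed with statuscode: 500". A minimal sketch of a handler producing that shape (the check names echo the log; the address and failure reasons are illustrative, not the router's real implementation):

```go
package main

import (
	"fmt"
	"net/http"
)

// healthz renders one [+]/[-] line per named check and returns HTTP 500
// if any check fails, mirroring the probe output quoted above.
func healthz(checks map[string]func() error) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		body, failed := "", false
		for name, check := range checks { // note: map iteration order is random
			if err := check(); err != nil {
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", name)
				failed = true
			} else {
				body += fmt.Sprintf("[+]%s ok\n", name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError)
			body += "healthz check failed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	http.HandleFunc("/healthz", healthz(map[string]func() error{
		"backend-http":    func() error { return fmt.Errorf("no backends") },
		"has-synced":      func() error { return fmt.Errorf("not synced") },
		"process-running": func() error { return nil },
	}))
	http.ListenAndServe(":8080", nil)
}
```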
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.681488 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:46 crc kubenswrapper[4775]: E0123 14:06:46.681819 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:47.181788634 +0000 UTC m=+154.176617374 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.686128 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm" Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.734611 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" podStartSLOduration=130.734592591 podStartE2EDuration="2m10.734592591s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:46.709903574 +0000 UTC m=+153.704732324" watchObservedRunningTime="2026-01-23 14:06:46.734592591 +0000 UTC m=+153.729421331" Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.782084 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:46 crc kubenswrapper[4775]: E0123 14:06:46.782431 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:47.282410459 +0000 UTC m=+154.277239199 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.792430 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-f7z9k" podStartSLOduration=130.792410348 podStartE2EDuration="2m10.792410348s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:46.790075078 +0000 UTC m=+153.784903828" watchObservedRunningTime="2026-01-23 14:06:46.792410348 +0000 UTC m=+153.787239098" Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.883456 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:46 crc kubenswrapper[4775]: E0123 14:06:46.884158 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:47.384138247 +0000 UTC m=+154.378967087 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.894879 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" podStartSLOduration=130.894856777 podStartE2EDuration="2m10.894856777s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:46.864932674 +0000 UTC m=+153.859761424" watchObservedRunningTime="2026-01-23 14:06:46.894856777 +0000 UTC m=+153.889685517" Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.984744 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:46 crc kubenswrapper[4775]: E0123 14:06:46.985085 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:47.485068701 +0000 UTC m=+154.479897441 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.086223 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:47 crc kubenswrapper[4775]: E0123 14:06:47.086582 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:47.586567102 +0000 UTC m=+154.581395832 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.125364 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-7gqzl" Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.187125 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:47 crc kubenswrapper[4775]: E0123 14:06:47.187481 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:47.687463006 +0000 UTC m=+154.682291746 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.284865 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v2bx4"] Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.285199 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" podUID="1f3aab1c-726d-4027-b629-e04916bc4f8b" containerName="controller-manager" containerID="cri-o://1976824d0d7581f25778cade1ceabbaefa46516e629ce58d32cb2d84aec22a6a" gracePeriod=30 Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.288306 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:47 crc kubenswrapper[4775]: E0123 14:06:47.288593 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:47.788582915 +0000 UTC m=+154.783411655 (durationBeforeRetry 500ms). 
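The "Killing container with a grace period" entry for controller-manager-879f6c89f-v2bx4 (gracePeriod=30) follows the usual termination contract: signal the container, wait up to the grace period, then force-kill. The kubelet delegates the actual kill to the CRI runtime (cri-o here); the sketch below only mirrors the sequence at the plain-process level, with an assumed helper name. The exitCode=0 seen shortly after in this log indicates the container shut down cleanly within the window:

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace sends SIGTERM, waits up to the grace period, then
// escalates to SIGKILL -- the shape of a graceful container stop.
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) {
	_ = cmd.Process.Signal(syscall.SIGTERM)

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	select {
	case <-done:
		// Exited within the grace period (the exitCode=0 case above).
	case <-time.After(grace):
		_ = cmd.Process.Kill() // escalate to SIGKILL
		<-done
	}
}

func main() {
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		os.Exit(1)
	}
	stopWithGrace(cmd, 30*time.Second)
}
```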
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.328080 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.389449 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:47 crc kubenswrapper[4775]: E0123 14:06:47.389584 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:47.889566121 +0000 UTC m=+154.884394861 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.389608 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:47 crc kubenswrapper[4775]: E0123 14:06:47.389900 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:47.889891611 +0000 UTC m=+154.884720351 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.490666 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:47 crc kubenswrapper[4775]: E0123 14:06:47.490955 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:47.990941299 +0000 UTC m=+154.985770039 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.510739 4775 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-rfbk5 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.510815 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" podUID="d7707d7a-bfb7-4600-98f4-be607d9e77f4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.513551 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" event={"ID":"aaac7553-88f9-49bd-811f-e993ad0cd40d","Type":"ContainerStarted","Data":"6d5699cb0bae3a1b15b42aa9d1eddc4aa81cd5e62ea544ef8bf880646999fd08"} Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.513611 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" event={"ID":"aaac7553-88f9-49bd-811f-e993ad0cd40d","Type":"ContainerStarted","Data":"28a9405b9b29619c4d55a76d941051d6302e32cfb4060b64f3318e315e1fcc7a"} Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.517513 4775 generic.go:334] "Generic (PLEG): container finished" podID="1f3aab1c-726d-4027-b629-e04916bc4f8b" containerID="1976824d0d7581f25778cade1ceabbaefa46516e629ce58d32cb2d84aec22a6a" exitCode=0 Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.517660 4775 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" event={"ID":"1f3aab1c-726d-4027-b629-e04916bc4f8b","Type":"ContainerDied","Data":"1976824d0d7581f25778cade1ceabbaefa46516e629ce58d32cb2d84aec22a6a"} Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.593393 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:47 crc kubenswrapper[4775]: E0123 14:06:47.593738 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:48.093723038 +0000 UTC m=+155.088551778 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.650317 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:06:47 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:06:47 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:06:47 crc kubenswrapper[4775]: healthz check failed Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.650384 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.695555 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:47 crc kubenswrapper[4775]: E0123 14:06:47.695723 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:48.195696643 +0000 UTC m=+155.190525383 (durationBeforeRetry 500ms). 
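The ContainerDied/ContainerStarted lines come from the PLEG (Pod Lifecycle Event Generator): the kubelet periodically relists container state from the runtime, diffs it against the previous snapshot, and feeds the resulting events into the sync loop, which is how the exitCode=0 for the controller-manager container above becomes a "ContainerDied" event. A toy relist diff under assumed type names (the kubelet's real implementation tracks far more state):

```go
package main

import "fmt"

// state and event are illustrative stand-ins for the kubelet's PLEG types.
type state string

const (
	running state = "running"
	exited  state = "exited"
)

type event struct{ Pod, Container, Type string }

// relist diffs the current container snapshot against the previous one and
// emits lifecycle events -- the essence of the PLEG relist loop.
func relist(prev, curr map[string]state, pod string) []event {
	var out []event
	for id, s := range curr {
		was, seen := prev[id]
		switch {
		case s == running && (!seen || was != running):
			out = append(out, event{pod, id, "ContainerStarted"})
		case s == exited && seen && was == running:
			out = append(out, event{pod, id, "ContainerDied"})
		}
	}
	return out
}

func main() {
	prev := map[string]state{"1976824d0d75": running}
	curr := map[string]state{"1976824d0d75": exited}
	fmt.Println(relist(prev, curr, "controller-manager-879f6c89f-v2bx4"))
}
```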
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.695765 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:47 crc kubenswrapper[4775]: E0123 14:06:47.696207 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:48.196199578 +0000 UTC m=+155.191028318 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.800992 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:47 crc kubenswrapper[4775]: E0123 14:06:47.801289 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:48.301273816 +0000 UTC m=+155.296102556 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.903045 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:47 crc kubenswrapper[4775]: E0123 14:06:47.903347 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:48.403336034 +0000 UTC m=+155.398164774 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.907944 4775 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.947722 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2q2jj"] Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.948689 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2q2jj" Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.950703 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.956751 4775 util.go:48] "No ready sandbox for pod can be found. 
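The plugin_watcher entry just above is the turning point for all the CSI retries in this section: the kubelet watches /var/lib/kubelet/plugins_registry for registration sockets, and kubevirt.io.hostpath-provisioner-reg.sock appearing there is what finally gets the driver registered. A minimal sketch of that directory watch using fsnotify (the log text imitates the kubelet's message; error handling is reduced to the essentials):

```go
package main

import (
	"log"
	"path/filepath"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	// Directory taken from the plugin_watcher entry above.
	if err := w.Add("/var/lib/kubelet/plugins_registry"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			// A newly created *.sock file is a driver registration request.
			if ev.Op&fsnotify.Create != 0 && filepath.Ext(ev.Name) == ".sock" {
				log.Printf("adding socket path to desired state cache: %s", ev.Name)
			}
		case err := <-w.Errors:
			log.Println(err)
		}
	}
}
```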
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.964265 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.004305 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcvf9\" (UniqueName: \"kubernetes.io/projected/1f3aab1c-726d-4027-b629-e04916bc4f8b-kube-api-access-vcvf9\") pod \"1f3aab1c-726d-4027-b629-e04916bc4f8b\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.004456 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.004501 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f3aab1c-726d-4027-b629-e04916bc4f8b-serving-cert\") pod \"1f3aab1c-726d-4027-b629-e04916bc4f8b\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.004535 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-client-ca\") pod \"1f3aab1c-726d-4027-b629-e04916bc4f8b\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.004560 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-proxy-ca-bundles\") pod \"1f3aab1c-726d-4027-b629-e04916bc4f8b\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.004588 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-config\") pod \"1f3aab1c-726d-4027-b629-e04916bc4f8b\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " Jan 23 14:06:48 crc kubenswrapper[4775]: E0123 14:06:48.004649 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:48.504621109 +0000 UTC m=+155.499449849 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.004790 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bb5169a-229e-4d38-beea-4783c11d0098-catalog-content\") pod \"community-operators-2q2jj\" (UID: \"8bb5169a-229e-4d38-beea-4783c11d0098\") " pod="openshift-marketplace/community-operators-2q2jj" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.004861 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bb5169a-229e-4d38-beea-4783c11d0098-utilities\") pod \"community-operators-2q2jj\" (UID: \"8bb5169a-229e-4d38-beea-4783c11d0098\") " pod="openshift-marketplace/community-operators-2q2jj" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.004919 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2lfm\" (UniqueName: \"kubernetes.io/projected/8bb5169a-229e-4d38-beea-4783c11d0098-kube-api-access-f2lfm\") pod \"community-operators-2q2jj\" (UID: \"8bb5169a-229e-4d38-beea-4783c11d0098\") " pod="openshift-marketplace/community-operators-2q2jj" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.004940 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.005158 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-client-ca" (OuterVolumeSpecName: "client-ca") pod "1f3aab1c-726d-4027-b629-e04916bc4f8b" (UID: "1f3aab1c-726d-4027-b629-e04916bc4f8b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:06:48 crc kubenswrapper[4775]: E0123 14:06:48.005204 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:48.505193256 +0000 UTC m=+155.500021996 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.005254 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-config" (OuterVolumeSpecName: "config") pod "1f3aab1c-726d-4027-b629-e04916bc4f8b" (UID: "1f3aab1c-726d-4027-b629-e04916bc4f8b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.005282 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1f3aab1c-726d-4027-b629-e04916bc4f8b" (UID: "1f3aab1c-726d-4027-b629-e04916bc4f8b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.013109 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f3aab1c-726d-4027-b629-e04916bc4f8b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1f3aab1c-726d-4027-b629-e04916bc4f8b" (UID: "1f3aab1c-726d-4027-b629-e04916bc4f8b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.024252 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f3aab1c-726d-4027-b629-e04916bc4f8b-kube-api-access-vcvf9" (OuterVolumeSpecName: "kube-api-access-vcvf9") pod "1f3aab1c-726d-4027-b629-e04916bc4f8b" (UID: "1f3aab1c-726d-4027-b629-e04916bc4f8b"). InnerVolumeSpecName "kube-api-access-vcvf9". 
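The reconciler_common entries follow the volume manager's desired-state/actual-state reconciliation: anything mounted but no longer desired gets an UnmountVolume (the PVC still attributed to the deleted pod 8f668bae-612b-4b75-9490-919e737c6a3b), anything desired but not yet mounted gets a MountVolume (the same PVC for the new image-registry pod), and the loop simply re-runs until the operations stop failing, which is why the same pair of entries reappears every ~100ms above. A stripped-down sketch of that diff, with illustrative names:

```go
package main

import "fmt"

// reconcile compares the desired set of mounted volumes with the actual
// set: extra volumes are unmounted, missing volumes are mounted. The
// kubelet loops this continuously, as the repeating entries above show.
func reconcile(desired, actual map[string]bool) {
	for v := range actual {
		if !desired[v] {
			fmt.Printf("operationExecutor.UnmountVolume started for volume %q\n", v)
		}
	}
	for v := range desired {
		if !actual[v] {
			fmt.Printf("operationExecutor.MountVolume started for volume %q\n", v)
		}
	}
}

func main() {
	desired := map[string]bool{"pvc-657094db (image-registry pod)": true}
	actual := map[string]bool{"pvc-657094db (old pod 8f668bae)": true}
	reconcile(desired, actual)
}
```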
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.083907 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2q2jj"] Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.106554 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.106758 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bb5169a-229e-4d38-beea-4783c11d0098-utilities\") pod \"community-operators-2q2jj\" (UID: \"8bb5169a-229e-4d38-beea-4783c11d0098\") " pod="openshift-marketplace/community-operators-2q2jj" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.106840 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2lfm\" (UniqueName: \"kubernetes.io/projected/8bb5169a-229e-4d38-beea-4783c11d0098-kube-api-access-f2lfm\") pod \"community-operators-2q2jj\" (UID: \"8bb5169a-229e-4d38-beea-4783c11d0098\") " pod="openshift-marketplace/community-operators-2q2jj" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.107082 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bb5169a-229e-4d38-beea-4783c11d0098-catalog-content\") pod \"community-operators-2q2jj\" (UID: \"8bb5169a-229e-4d38-beea-4783c11d0098\") " pod="openshift-marketplace/community-operators-2q2jj" Jan 23 14:06:48 crc kubenswrapper[4775]: E0123 14:06:48.107128 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:48.60711474 +0000 UTC m=+155.601943470 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.107174 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.107186 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcvf9\" (UniqueName: \"kubernetes.io/projected/1f3aab1c-726d-4027-b629-e04916bc4f8b-kube-api-access-vcvf9\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.107195 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f3aab1c-726d-4027-b629-e04916bc4f8b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.107203 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.107211 4775 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.107945 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bb5169a-229e-4d38-beea-4783c11d0098-catalog-content\") pod \"community-operators-2q2jj\" (UID: \"8bb5169a-229e-4d38-beea-4783c11d0098\") " pod="openshift-marketplace/community-operators-2q2jj" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.107973 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bb5169a-229e-4d38-beea-4783c11d0098-utilities\") pod \"community-operators-2q2jj\" (UID: \"8bb5169a-229e-4d38-beea-4783c11d0098\") " pod="openshift-marketplace/community-operators-2q2jj" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.139666 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-285dn"] Jan 23 14:06:48 crc kubenswrapper[4775]: E0123 14:06:48.139873 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f3aab1c-726d-4027-b629-e04916bc4f8b" containerName="controller-manager" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.139885 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f3aab1c-726d-4027-b629-e04916bc4f8b" containerName="controller-manager" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.139984 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f3aab1c-726d-4027-b629-e04916bc4f8b" containerName="controller-manager" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.140610 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-285dn" Jan 23 14:06:48 crc kubenswrapper[4775]: W0123 14:06:48.161755 4775 reflector.go:561] object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g": failed to list *v1.Secret: secrets "certified-operators-dockercfg-4rs5g" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Jan 23 14:06:48 crc kubenswrapper[4775]: E0123 14:06:48.161817 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-4rs5g\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"certified-operators-dockercfg-4rs5g\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.162647 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2lfm\" (UniqueName: \"kubernetes.io/projected/8bb5169a-229e-4d38-beea-4783c11d0098-kube-api-access-f2lfm\") pod \"community-operators-2q2jj\" (UID: \"8bb5169a-229e-4d38-beea-4783c11d0098\") " pod="openshift-marketplace/community-operators-2q2jj" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.167562 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-285dn"] Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.208823 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b219edd-2ebd-4968-b427-ec555eade68c-catalog-content\") pod \"certified-operators-285dn\" (UID: \"1b219edd-2ebd-4968-b427-ec555eade68c\") " pod="openshift-marketplace/certified-operators-285dn" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.208874 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnxtf\" (UniqueName: \"kubernetes.io/projected/1b219edd-2ebd-4968-b427-ec555eade68c-kube-api-access-vnxtf\") pod \"certified-operators-285dn\" (UID: \"1b219edd-2ebd-4968-b427-ec555eade68c\") " pod="openshift-marketplace/certified-operators-285dn" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.208922 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.208945 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b219edd-2ebd-4968-b427-ec555eade68c-utilities\") pod \"certified-operators-285dn\" (UID: \"1b219edd-2ebd-4968-b427-ec555eade68c\") " pod="openshift-marketplace/certified-operators-285dn" Jan 23 14:06:48 crc kubenswrapper[4775]: E0123 14:06:48.209254 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.268615 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2q2jj"
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.309867 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 14:06:48 crc kubenswrapper[4775]: E0123 14:06:48.310078 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:48.81004935 +0000 UTC m=+155.804878090 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.310442 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b219edd-2ebd-4968-b427-ec555eade68c-catalog-content\") pod \"certified-operators-285dn\" (UID: \"1b219edd-2ebd-4968-b427-ec555eade68c\") " pod="openshift-marketplace/certified-operators-285dn"
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.310492 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnxtf\" (UniqueName: \"kubernetes.io/projected/1b219edd-2ebd-4968-b427-ec555eade68c-kube-api-access-vnxtf\") pod \"certified-operators-285dn\" (UID: \"1b219edd-2ebd-4968-b427-ec555eade68c\") " pod="openshift-marketplace/certified-operators-285dn"
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.310560 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl"
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.310598 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b219edd-2ebd-4968-b427-ec555eade68c-utilities\") pod \"certified-operators-285dn\" (UID: \"1b219edd-2ebd-4968-b427-ec555eade68c\") " pod="openshift-marketplace/certified-operators-285dn"
Jan 23 14:06:48 crc kubenswrapper[4775]: E0123 14:06:48.311017 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:48.810997739 +0000 UTC m=+155.805826479 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.311043 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b219edd-2ebd-4968-b427-ec555eade68c-catalog-content\") pod \"certified-operators-285dn\" (UID: \"1b219edd-2ebd-4968-b427-ec555eade68c\") " pod="openshift-marketplace/certified-operators-285dn"
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.311133 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b219edd-2ebd-4968-b427-ec555eade68c-utilities\") pod \"certified-operators-285dn\" (UID: \"1b219edd-2ebd-4968-b427-ec555eade68c\") " pod="openshift-marketplace/certified-operators-285dn"
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.317056 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pphm8"]
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.317985 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pphm8"
Need to start a new one" pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.335930 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pphm8"] Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.340973 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnxtf\" (UniqueName: \"kubernetes.io/projected/1b219edd-2ebd-4968-b427-ec555eade68c-kube-api-access-vnxtf\") pod \"certified-operators-285dn\" (UID: \"1b219edd-2ebd-4968-b427-ec555eade68c\") " pod="openshift-marketplace/certified-operators-285dn" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.411624 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.411794 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a627ae2-fe8d-403e-9d14-3c3ace588da5-utilities\") pod \"community-operators-pphm8\" (UID: \"1a627ae2-fe8d-403e-9d14-3c3ace588da5\") " pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.411831 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a627ae2-fe8d-403e-9d14-3c3ace588da5-catalog-content\") pod \"community-operators-pphm8\" (UID: \"1a627ae2-fe8d-403e-9d14-3c3ace588da5\") " pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.411879 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhknv\" (UniqueName: \"kubernetes.io/projected/1a627ae2-fe8d-403e-9d14-3c3ace588da5-kube-api-access-dhknv\") pod \"community-operators-pphm8\" (UID: \"1a627ae2-fe8d-403e-9d14-3c3ace588da5\") " pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:06:48 crc kubenswrapper[4775]: E0123 14:06:48.411985 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:48.911969784 +0000 UTC m=+155.906798524 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.513551 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a627ae2-fe8d-403e-9d14-3c3ace588da5-utilities\") pod \"community-operators-pphm8\" (UID: \"1a627ae2-fe8d-403e-9d14-3c3ace588da5\") " pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.513585 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a627ae2-fe8d-403e-9d14-3c3ace588da5-catalog-content\") pod \"community-operators-pphm8\" (UID: \"1a627ae2-fe8d-403e-9d14-3c3ace588da5\") " pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.513633 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhknv\" (UniqueName: \"kubernetes.io/projected/1a627ae2-fe8d-403e-9d14-3c3ace588da5-kube-api-access-dhknv\") pod \"community-operators-pphm8\" (UID: \"1a627ae2-fe8d-403e-9d14-3c3ace588da5\") " pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.513661 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:48 crc kubenswrapper[4775]: E0123 14:06:48.514017 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:49.014000331 +0000 UTC m=+156.008829071 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.514667 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a627ae2-fe8d-403e-9d14-3c3ace588da5-utilities\") pod \"community-operators-pphm8\" (UID: \"1a627ae2-fe8d-403e-9d14-3c3ace588da5\") " pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.514869 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a627ae2-fe8d-403e-9d14-3c3ace588da5-catalog-content\") pod \"community-operators-pphm8\" (UID: \"1a627ae2-fe8d-403e-9d14-3c3ace588da5\") " pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.522778 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hdhzj"] Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.524058 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hdhzj" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.541758 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hdhzj"] Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.541795 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhknv\" (UniqueName: \"kubernetes.io/projected/1a627ae2-fe8d-403e-9d14-3c3ace588da5-kube-api-access-dhknv\") pod \"community-operators-pphm8\" (UID: \"1a627ae2-fe8d-403e-9d14-3c3ace588da5\") " pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.546971 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.547990 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" event={"ID":"1f3aab1c-726d-4027-b629-e04916bc4f8b","Type":"ContainerDied","Data":"c804f2807463870f94ca39d16cb9e5b2566a2fdc9148b1292a1636387b79edff"} Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.548047 4775 scope.go:117] "RemoveContainer" containerID="1976824d0d7581f25778cade1ceabbaefa46516e629ce58d32cb2d84aec22a6a" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.587883 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" event={"ID":"aaac7553-88f9-49bd-811f-e993ad0cd40d","Type":"ContainerStarted","Data":"df433355476c1e3453b026f3a6326a60187145c8f5ca08e20e52c73b1cefe1da"} Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.618422 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" podStartSLOduration=10.618403838999999 podStartE2EDuration="10.618403839s" podCreationTimestamp="2026-01-23 14:06:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:48.614718309 +0000 UTC m=+155.609547059" watchObservedRunningTime="2026-01-23 14:06:48.618403839 +0000 UTC m=+155.613232579" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.622376 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.622562 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/945aeb53-25e2-4666-8fbe-a12be2948454-catalog-content\") pod \"certified-operators-hdhzj\" (UID: \"945aeb53-25e2-4666-8fbe-a12be2948454\") " pod="openshift-marketplace/certified-operators-hdhzj" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.622642 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/945aeb53-25e2-4666-8fbe-a12be2948454-utilities\") pod \"certified-operators-hdhzj\" (UID: \"945aeb53-25e2-4666-8fbe-a12be2948454\") " pod="openshift-marketplace/certified-operators-hdhzj" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.622693 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4wm7\" (UniqueName: \"kubernetes.io/projected/945aeb53-25e2-4666-8fbe-a12be2948454-kube-api-access-w4wm7\") pod \"certified-operators-hdhzj\" (UID: \"945aeb53-25e2-4666-8fbe-a12be2948454\") " pod="openshift-marketplace/certified-operators-hdhzj" Jan 23 14:06:48 crc kubenswrapper[4775]: E0123 14:06:48.622795 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-23 14:06:49.12278049 +0000 UTC m=+156.117609230 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.634877 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v2bx4"] Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.635163 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.654191 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:06:48 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:06:48 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:06:48 crc kubenswrapper[4775]: healthz check failed Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.654528 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.668009 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v2bx4"] Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.671450 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2q2jj"] Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.725508 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.725568 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4wm7\" (UniqueName: \"kubernetes.io/projected/945aeb53-25e2-4666-8fbe-a12be2948454-kube-api-access-w4wm7\") pod \"certified-operators-hdhzj\" (UID: \"945aeb53-25e2-4666-8fbe-a12be2948454\") " pod="openshift-marketplace/certified-operators-hdhzj" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.725599 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/945aeb53-25e2-4666-8fbe-a12be2948454-catalog-content\") pod \"certified-operators-hdhzj\" (UID: \"945aeb53-25e2-4666-8fbe-a12be2948454\") " pod="openshift-marketplace/certified-operators-hdhzj" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.725701 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
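[editor's note] The router entries above show a startup probe failing: the endpoint answers 500 and the body lists which internal checks ([-]backend-http, [-]has-synced) are still failing. For reference, a probe of that shape is declared on the container spec; a sketch with client-go types follows, where the path, port, and thresholds are assumptions for illustration, not values read from the actual router deployment:

    // probesketch: what an HTTP startup probe looks like in the API.
    package probesketch

    import (
    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    // routerStartupProbe builds a probe the kubelet would run and, on a
    // non-2xx answer, log exactly like the probeType="Startup" failures
    // above. While a startup probe fails, liveness/readiness probes are
    // held off and the pod stays unready.
    func routerStartupProbe() *corev1.Probe {
    	return &corev1.Probe{
    		ProbeHandler: corev1.ProbeHandler{
    			HTTPGet: &corev1.HTTPGetAction{
    				Path: "/healthz/ready",       // assumed path
    				Port: intstr.FromInt(1936),   // assumed stats port
    			},
    		},
    		PeriodSeconds:    1,   // retry quickly during startup
    		FailureThreshold: 120, // tolerate a slow cold start
    	}
    }

These repeated failures are expected noise during node startup; the probe exists precisely so the router is not marked ready before HAProxy has synced its backends.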
(UniqueName: \"kubernetes.io/empty-dir/945aeb53-25e2-4666-8fbe-a12be2948454-utilities\") pod \"certified-operators-hdhzj\" (UID: \"945aeb53-25e2-4666-8fbe-a12be2948454\") " pod="openshift-marketplace/certified-operators-hdhzj" Jan 23 14:06:48 crc kubenswrapper[4775]: E0123 14:06:48.727694 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:49.227680752 +0000 UTC m=+156.222509492 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.728100 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/945aeb53-25e2-4666-8fbe-a12be2948454-utilities\") pod \"certified-operators-hdhzj\" (UID: \"945aeb53-25e2-4666-8fbe-a12be2948454\") " pod="openshift-marketplace/certified-operators-hdhzj" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.728571 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/945aeb53-25e2-4666-8fbe-a12be2948454-catalog-content\") pod \"certified-operators-hdhzj\" (UID: \"945aeb53-25e2-4666-8fbe-a12be2948454\") " pod="openshift-marketplace/certified-operators-hdhzj" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.751846 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4wm7\" (UniqueName: \"kubernetes.io/projected/945aeb53-25e2-4666-8fbe-a12be2948454-kube-api-access-w4wm7\") pod \"certified-operators-hdhzj\" (UID: \"945aeb53-25e2-4666-8fbe-a12be2948454\") " pod="openshift-marketplace/certified-operators-hdhzj" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.826485 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:48 crc kubenswrapper[4775]: E0123 14:06:48.826757 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:49.32672796 +0000 UTC m=+156.321556780 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.882124 4775 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-23T14:06:47.907979993Z","Handler":null,"Name":""} Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.927867 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:48 crc kubenswrapper[4775]: E0123 14:06:48.928185 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:49.4281729 +0000 UTC m=+156.423001640 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.028693 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:49 crc kubenswrapper[4775]: E0123 14:06:49.029119 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:49.529104134 +0000 UTC m=+156.523932874 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.047926 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pphm8"] Jan 23 14:06:49 crc kubenswrapper[4775]: W0123 14:06:49.052685 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a627ae2_fe8d_403e_9d14_3c3ace588da5.slice/crio-0b453500d83d6bbbd03aaa519b618891a6bceb9a87ed025821643578d93cd618 WatchSource:0}: Error finding container 0b453500d83d6bbbd03aaa519b618891a6bceb9a87ed025821643578d93cd618: Status 404 returned error can't find the container with id 0b453500d83d6bbbd03aaa519b618891a6bceb9a87ed025821643578d93cd618 Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.130794 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:49 crc kubenswrapper[4775]: E0123 14:06:49.131125 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:49.63109907 +0000 UTC m=+156.625927820 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.232300 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:49 crc kubenswrapper[4775]: E0123 14:06:49.232577 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:49.732549559 +0000 UTC m=+156.727378329 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.333749 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:49 crc kubenswrapper[4775]: E0123 14:06:49.334138 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:49.834122273 +0000 UTC m=+156.828951013 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.367227 4775 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.367296 4775 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.435236 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.444379 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.470301 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.471948 4775 util.go:30] "No sandbox for pod can be found. 
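[editor's note] The csi_plugin.go entries above are the kubelet side of the plugin-registration handshake that finally unblocks the stuck PVC: the driver's registrar exposes a small gRPC Registration service on a socket under /var/lib/kubelet/plugins_registry/ (visible in the RegisterPlugin entry earlier), kubelet calls GetInfo, validates the driver, and acknowledges via NotifyRegistrationStatus. A sketch of the registrar side, following the node-driver-registrar pattern; the name, endpoint, and socket path are taken from the log, the rest is illustrative:

    // registrarsketch: minimal kubelet plugin-registration server.
    package main

    import (
    	"context"
    	"net"

    	"google.golang.org/grpc"
    	registerapi "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
    )

    type registrationServer struct{}

    // GetInfo tells kubelet which CSI driver lives behind which socket;
    // the values mirror the ones in the log entries above.
    func (registrationServer) GetInfo(ctx context.Context, _ *registerapi.InfoRequest) (*registerapi.PluginInfo, error) {
    	return &registerapi.PluginInfo{
    		Type:              registerapi.CSIPlugin,
    		Name:              "kubevirt.io.hostpath-provisioner",
    		Endpoint:          "/var/lib/kubelet/plugins/csi-hostpath/csi.sock",
    		SupportedVersions: []string{"1.0.0"},
    	}, nil
    }

    // NotifyRegistrationStatus is kubelet's ack; on a failed registration
    // a real registrar would log status.Error and exit to be restarted.
    func (registrationServer) NotifyRegistrationStatus(ctx context.Context, status *registerapi.RegistrationStatus) (*registerapi.RegistrationStatusResponse, error) {
    	return &registerapi.RegistrationStatusResponse{}, nil
    }

    func main() {
    	lis, err := net.Listen("unix", "/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock")
    	if err != nil {
    		panic(err)
    	}
    	s := grpc.NewServer()
    	registerapi.RegisterRegistrationServer(s, registrationServer{})
    	_ = s.Serve(lis)
    }

Immediately after the registration is accepted, the very next TearDown attempt (operation_generator.go:803 above) succeeds, ending the retry loop.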
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.474646 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.476237 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.482337 4775 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-marketplace/certified-operators-285dn" secret="" err="failed to sync secret cache: timed out waiting for the condition" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.482426 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-285dn" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.487190 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.536714 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.536885 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4927c747-c679-46bf-bcc6-485f87f885ab-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4927c747-c679-46bf-bcc6-485f87f885ab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.536921 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4927c747-c679-46bf-bcc6-485f87f885ab-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4927c747-c679-46bf-bcc6-485f87f885ab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.538140 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.540471 4775 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.540511 4775 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.547704 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hdhzj" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.609286 4775 generic.go:334] "Generic (PLEG): container finished" podID="1a627ae2-fe8d-403e-9d14-3c3ace588da5" containerID="cbed6950aa3965cd8bfc7aa378027bf0a2d1e04ccbea9bb4f1e5636ae166f729" exitCode=0 Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.609393 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pphm8" event={"ID":"1a627ae2-fe8d-403e-9d14-3c3ace588da5","Type":"ContainerDied","Data":"cbed6950aa3965cd8bfc7aa378027bf0a2d1e04ccbea9bb4f1e5636ae166f729"} Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.609421 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pphm8" event={"ID":"1a627ae2-fe8d-403e-9d14-3c3ace588da5","Type":"ContainerStarted","Data":"0b453500d83d6bbbd03aaa519b618891a6bceb9a87ed025821643578d93cd618"} Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.616187 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.620304 4775 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.632703 4775 generic.go:334] "Generic (PLEG): container finished" podID="8bb5169a-229e-4d38-beea-4783c11d0098" containerID="c0baa5a93e54c6225c779b90a89902f01c5bdd44c7fddb995bab3ef18e6ecb5f" exitCode=0 Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.632773 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q2jj" event={"ID":"8bb5169a-229e-4d38-beea-4783c11d0098","Type":"ContainerDied","Data":"c0baa5a93e54c6225c779b90a89902f01c5bdd44c7fddb995bab3ef18e6ecb5f"} Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.632814 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q2jj" event={"ID":"8bb5169a-229e-4d38-beea-4783c11d0098","Type":"ContainerStarted","Data":"3666244710ce45438b030ced5df57918d02f4be6ca49d93c06949ae50a2a548e"} Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.634458 4775 generic.go:334] "Generic (PLEG): container finished" podID="2d6b6f17-bb56-49ba-8487-6e07346780a1" containerID="bd180f88acb55bc6174b54cab0740792964b942d82c9bf0cffd2ac1751bececd" exitCode=0 Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.634602 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" event={"ID":"2d6b6f17-bb56-49ba-8487-6e07346780a1","Type":"ContainerDied","Data":"bd180f88acb55bc6174b54cab0740792964b942d82c9bf0cffd2ac1751bececd"} Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.637498 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4927c747-c679-46bf-bcc6-485f87f885ab-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4927c747-c679-46bf-bcc6-485f87f885ab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.637539 4775 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4927c747-c679-46bf-bcc6-485f87f885ab-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4927c747-c679-46bf-bcc6-485f87f885ab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.637901 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4927c747-c679-46bf-bcc6-485f87f885ab-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4927c747-c679-46bf-bcc6-485f87f885ab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.652269 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:06:49 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:06:49 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:06:49 crc kubenswrapper[4775]: healthz check failed Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.652335 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.684663 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4927c747-c679-46bf-bcc6-485f87f885ab-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4927c747-c679-46bf-bcc6-485f87f885ab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.734924 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f3aab1c-726d-4027-b629-e04916bc4f8b" path="/var/lib/kubelet/pods/1f3aab1c-726d-4027-b629-e04916bc4f8b/volumes" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.735684 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 23 14:06:49 crc kubenswrapper[4775]: W0123 14:06:49.735675 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b219edd_2ebd_4968_b427_ec555eade68c.slice/crio-9a6cbd2e89e6d00653f0a6c222530e1e89b3f96e06271f5d87d7fff651ac3937 WatchSource:0}: Error finding container 9a6cbd2e89e6d00653f0a6c222530e1e89b3f96e06271f5d87d7fff651ac3937: Status 404 returned error can't find the container with id 9a6cbd2e89e6d00653f0a6c222530e1e89b3f96e06271f5d87d7fff651ac3937 Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.736107 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-285dn"] Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.776997 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.782646 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hdhzj"] Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.797486 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fp8bb"] Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.799575 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.805637 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.806000 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.806241 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.806259 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.807129 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.807603 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.808249 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fp8bb"] Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.815984 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.839111 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-config\") pod \"controller-manager-879f6c89f-fp8bb\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.839173 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-client-ca\") pod \"controller-manager-879f6c89f-fp8bb\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.839210 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccjkp\" (UniqueName: \"kubernetes.io/projected/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-kube-api-access-ccjkp\") pod \"controller-manager-879f6c89f-fp8bb\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.839243 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-serving-cert\") pod \"controller-manager-879f6c89f-fp8bb\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.839280 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fp8bb\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.910556 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.940979 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccjkp\" (UniqueName: \"kubernetes.io/projected/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-kube-api-access-ccjkp\") pod \"controller-manager-879f6c89f-fp8bb\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.941021 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-serving-cert\") pod \"controller-manager-879f6c89f-fp8bb\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.941050 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fp8bb\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.941109 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-config\") pod \"controller-manager-879f6c89f-fp8bb\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.941134 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-client-ca\") pod \"controller-manager-879f6c89f-fp8bb\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.942290 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-client-ca\") pod \"controller-manager-879f6c89f-fp8bb\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.944471 4775 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fp8bb\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.945331 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-config\") pod \"controller-manager-879f6c89f-fp8bb\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.949492 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-serving-cert\") pod \"controller-manager-879f6c89f-fp8bb\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.959789 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccjkp\" (UniqueName: \"kubernetes.io/projected/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-kube-api-access-ccjkp\") pod \"controller-manager-879f6c89f-fp8bb\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.986962 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xpwjl"] Jan 23 14:06:50 crc kubenswrapper[4775]: W0123 14:06:50.024813 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85b405af_7314_4e53_93a5_252b69153561.slice/crio-b50d7a209d2fcc5cb17e88e539bff4914e9d70de68aa4c3a0de07ad93e7848e4 WatchSource:0}: Error finding container b50d7a209d2fcc5cb17e88e539bff4914e9d70de68aa4c3a0de07ad93e7848e4: Status 404 returned error can't find the container with id b50d7a209d2fcc5cb17e88e539bff4914e9d70de68aa4c3a0de07ad93e7848e4 Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.096371 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 14:06:50 crc kubenswrapper[4775]: W0123 14:06:50.113454 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4927c747_c679_46bf_bcc6_485f87f885ab.slice/crio-c5e2cf7dc94bca19d391d27aa9b768b85ccfa71fad8a84b4ced6560f9dc08f72 WatchSource:0}: Error finding container c5e2cf7dc94bca19d391d27aa9b768b85ccfa71fad8a84b4ced6560f9dc08f72: Status 404 returned error can't find the container with id c5e2cf7dc94bca19d391d27aa9b768b85ccfa71fad8a84b4ced6560f9dc08f72 Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.116310 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q6l68"] Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.117487 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q6l68" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.119464 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.124692 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q6l68"] Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.135959 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.245329 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phm66\" (UniqueName: \"kubernetes.io/projected/e59d5724-424f-4151-98a4-c2cfa3918ac0-kube-api-access-phm66\") pod \"redhat-marketplace-q6l68\" (UID: \"e59d5724-424f-4151-98a4-c2cfa3918ac0\") " pod="openshift-marketplace/redhat-marketplace-q6l68" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.245758 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e59d5724-424f-4151-98a4-c2cfa3918ac0-utilities\") pod \"redhat-marketplace-q6l68\" (UID: \"e59d5724-424f-4151-98a4-c2cfa3918ac0\") " pod="openshift-marketplace/redhat-marketplace-q6l68" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.245785 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e59d5724-424f-4151-98a4-c2cfa3918ac0-catalog-content\") pod \"redhat-marketplace-q6l68\" (UID: \"e59d5724-424f-4151-98a4-c2cfa3918ac0\") " pod="openshift-marketplace/redhat-marketplace-q6l68" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.333951 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fp8bb"] Jan 23 14:06:50 crc kubenswrapper[4775]: W0123 14:06:50.335334 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a8470cc_442d_4efc_91a2_af7e4fe75b3a.slice/crio-971aa15dc628c22efdf895129c598854abbaff49521d3e188678eecd5ae7782c WatchSource:0}: Error finding container 971aa15dc628c22efdf895129c598854abbaff49521d3e188678eecd5ae7782c: Status 404 returned error can't find the container with id 971aa15dc628c22efdf895129c598854abbaff49521d3e188678eecd5ae7782c Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.349512 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phm66\" (UniqueName: \"kubernetes.io/projected/e59d5724-424f-4151-98a4-c2cfa3918ac0-kube-api-access-phm66\") pod \"redhat-marketplace-q6l68\" (UID: \"e59d5724-424f-4151-98a4-c2cfa3918ac0\") " pod="openshift-marketplace/redhat-marketplace-q6l68" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.349563 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e59d5724-424f-4151-98a4-c2cfa3918ac0-utilities\") pod \"redhat-marketplace-q6l68\" (UID: \"e59d5724-424f-4151-98a4-c2cfa3918ac0\") " pod="openshift-marketplace/redhat-marketplace-q6l68" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.349584 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e59d5724-424f-4151-98a4-c2cfa3918ac0-catalog-content\") pod \"redhat-marketplace-q6l68\" (UID: \"e59d5724-424f-4151-98a4-c2cfa3918ac0\") " pod="openshift-marketplace/redhat-marketplace-q6l68" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.350382 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e59d5724-424f-4151-98a4-c2cfa3918ac0-catalog-content\") pod \"redhat-marketplace-q6l68\" (UID: \"e59d5724-424f-4151-98a4-c2cfa3918ac0\") " pod="openshift-marketplace/redhat-marketplace-q6l68" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.350443 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e59d5724-424f-4151-98a4-c2cfa3918ac0-utilities\") pod \"redhat-marketplace-q6l68\" (UID: \"e59d5724-424f-4151-98a4-c2cfa3918ac0\") " pod="openshift-marketplace/redhat-marketplace-q6l68" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.367573 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phm66\" (UniqueName: \"kubernetes.io/projected/e59d5724-424f-4151-98a4-c2cfa3918ac0-kube-api-access-phm66\") pod \"redhat-marketplace-q6l68\" (UID: \"e59d5724-424f-4151-98a4-c2cfa3918ac0\") " pod="openshift-marketplace/redhat-marketplace-q6l68" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.450207 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q6l68" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.520703 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-998gd"] Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.522382 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.534222 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-998gd"] Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.553369 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.554559 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.565730 4775 patch_prober.go:28] interesting pod/apiserver-76f77b778f-mc4h4 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 23 14:06:50 crc kubenswrapper[4775]: [+]log ok Jan 23 14:06:50 crc kubenswrapper[4775]: [+]etcd ok Jan 23 14:06:50 crc kubenswrapper[4775]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 23 14:06:50 crc kubenswrapper[4775]: [+]poststarthook/generic-apiserver-start-informers ok Jan 23 14:06:50 crc kubenswrapper[4775]: [+]poststarthook/max-in-flight-filter ok Jan 23 14:06:50 crc kubenswrapper[4775]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 23 14:06:50 crc kubenswrapper[4775]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 23 14:06:50 crc kubenswrapper[4775]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 23 14:06:50 crc kubenswrapper[4775]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 23 14:06:50 crc kubenswrapper[4775]: [+]poststarthook/project.openshift.io-projectcache ok Jan 23 14:06:50 crc kubenswrapper[4775]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 23 14:06:50 crc kubenswrapper[4775]: [+]poststarthook/openshift.io-startinformers ok Jan 23 14:06:50 crc kubenswrapper[4775]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 23 14:06:50 crc kubenswrapper[4775]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 23 14:06:50 crc kubenswrapper[4775]: livez check failed Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.565777 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" podUID="f9750de6-fc79-440e-8ad4-07acbe4edb49" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.605565 4775 patch_prober.go:28] interesting pod/downloads-7954f5f757-mvqcg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.605615 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mvqcg" podUID="8ba1b8ce-8332-45c9-bfb0-9a1842dea009" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.605748 4775 patch_prober.go:28] interesting pod/downloads-7954f5f757-mvqcg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get 
\"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.605790 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-mvqcg" podUID="8ba1b8ce-8332-45c9-bfb0-9a1842dea009" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.607486 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.652706 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25e2625-85e2-4f61-a654-347c5d111fc2-catalog-content\") pod \"redhat-marketplace-998gd\" (UID: \"a25e2625-85e2-4f61-a654-347c5d111fc2\") " pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.652759 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25e2625-85e2-4f61-a654-347c5d111fc2-utilities\") pod \"redhat-marketplace-998gd\" (UID: \"a25e2625-85e2-4f61-a654-347c5d111fc2\") " pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.652873 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mds26\" (UniqueName: \"kubernetes.io/projected/a25e2625-85e2-4f61-a654-347c5d111fc2-kube-api-access-mds26\") pod \"redhat-marketplace-998gd\" (UID: \"a25e2625-85e2-4f61-a654-347c5d111fc2\") " pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.654746 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:06:50 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:06:50 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:06:50 crc kubenswrapper[4775]: healthz check failed Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.654861 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.669227 4775 generic.go:334] "Generic (PLEG): container finished" podID="945aeb53-25e2-4666-8fbe-a12be2948454" containerID="6872f50c5369e996aaf9998a59794f18e488c47ef49db5d73fa140ee26fe751a" exitCode=0 Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.669331 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdhzj" event={"ID":"945aeb53-25e2-4666-8fbe-a12be2948454","Type":"ContainerDied","Data":"6872f50c5369e996aaf9998a59794f18e488c47ef49db5d73fa140ee26fe751a"} Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.669406 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdhzj" 
event={"ID":"945aeb53-25e2-4666-8fbe-a12be2948454","Type":"ContainerStarted","Data":"0b8e8f2a3112c9f0a5edf42bad4d4c0988004cce6f56bf24b39ad208c83c6912"} Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.680628 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4927c747-c679-46bf-bcc6-485f87f885ab","Type":"ContainerStarted","Data":"314e3c9c844a6677c18f60414390ec85b7864dca6d7ccf08978dd36224f72f04"} Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.680667 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4927c747-c679-46bf-bcc6-485f87f885ab","Type":"ContainerStarted","Data":"c5e2cf7dc94bca19d391d27aa9b768b85ccfa71fad8a84b4ced6560f9dc08f72"} Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.682576 4775 generic.go:334] "Generic (PLEG): container finished" podID="1b219edd-2ebd-4968-b427-ec555eade68c" containerID="1dfa5709162617f477770a0c1b0ee689961a84471dd689b9f7007baa498421fb" exitCode=0 Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.682623 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-285dn" event={"ID":"1b219edd-2ebd-4968-b427-ec555eade68c","Type":"ContainerDied","Data":"1dfa5709162617f477770a0c1b0ee689961a84471dd689b9f7007baa498421fb"} Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.682638 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-285dn" event={"ID":"1b219edd-2ebd-4968-b427-ec555eade68c","Type":"ContainerStarted","Data":"9a6cbd2e89e6d00653f0a6c222530e1e89b3f96e06271f5d87d7fff651ac3937"} Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.687481 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" event={"ID":"4a8470cc-442d-4efc-91a2-af7e4fe75b3a","Type":"ContainerStarted","Data":"f4b1eb7532640c0119fea3d1dd873eab326ab51390a8e59dcd343707c94098b9"} Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.687508 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" event={"ID":"4a8470cc-442d-4efc-91a2-af7e4fe75b3a","Type":"ContainerStarted","Data":"971aa15dc628c22efdf895129c598854abbaff49521d3e188678eecd5ae7782c"} Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.688453 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.692138 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" event={"ID":"85b405af-7314-4e53-93a5-252b69153561","Type":"ContainerStarted","Data":"4284b5552eca9842bbe2aed75c1f5823dcb142543281afc7abbca3b100b2fc8e"} Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.692192 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" event={"ID":"85b405af-7314-4e53-93a5-252b69153561","Type":"ContainerStarted","Data":"b50d7a209d2fcc5cb17e88e539bff4914e9d70de68aa4c3a0de07ad93e7848e4"} Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.692302 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.706346 4775 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.706323931 podStartE2EDuration="1.706323931s" podCreationTimestamp="2026-01-23 14:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:50.704023422 +0000 UTC m=+157.698852162" watchObservedRunningTime="2026-01-23 14:06:50.706323931 +0000 UTC m=+157.701152671" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.730140 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.742651 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q6l68"] Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.753731 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25e2625-85e2-4f61-a654-347c5d111fc2-utilities\") pod \"redhat-marketplace-998gd\" (UID: \"a25e2625-85e2-4f61-a654-347c5d111fc2\") " pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.753842 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mds26\" (UniqueName: \"kubernetes.io/projected/a25e2625-85e2-4f61-a654-347c5d111fc2-kube-api-access-mds26\") pod \"redhat-marketplace-998gd\" (UID: \"a25e2625-85e2-4f61-a654-347c5d111fc2\") " pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.754251 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25e2625-85e2-4f61-a654-347c5d111fc2-catalog-content\") pod \"redhat-marketplace-998gd\" (UID: \"a25e2625-85e2-4f61-a654-347c5d111fc2\") " pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.755955 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25e2625-85e2-4f61-a654-347c5d111fc2-catalog-content\") pod \"redhat-marketplace-998gd\" (UID: \"a25e2625-85e2-4f61-a654-347c5d111fc2\") " pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.755962 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25e2625-85e2-4f61-a654-347c5d111fc2-utilities\") pod \"redhat-marketplace-998gd\" (UID: \"a25e2625-85e2-4f61-a654-347c5d111fc2\") " pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:06:50 crc kubenswrapper[4775]: W0123 14:06:50.773827 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode59d5724_424f_4151_98a4_c2cfa3918ac0.slice/crio-26c35738c37491d0603ee348b5fe634ea59da9d48f5e4b15355f05e6dc983614 WatchSource:0}: Error finding container 26c35738c37491d0603ee348b5fe634ea59da9d48f5e4b15355f05e6dc983614: Status 404 returned error can't find the container with id 26c35738c37491d0603ee348b5fe634ea59da9d48f5e4b15355f05e6dc983614 Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.778424 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" podStartSLOduration=3.778407073 podStartE2EDuration="3.778407073s" podCreationTimestamp="2026-01-23 14:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:50.774669442 +0000 UTC m=+157.769498182" watchObservedRunningTime="2026-01-23 14:06:50.778407073 +0000 UTC m=+157.773235813" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.779253 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mds26\" (UniqueName: \"kubernetes.io/projected/a25e2625-85e2-4f61-a654-347c5d111fc2-kube-api-access-mds26\") pod \"redhat-marketplace-998gd\" (UID: \"a25e2625-85e2-4f61-a654-347c5d111fc2\") " pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.845100 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.852970 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" podStartSLOduration=134.852947539 podStartE2EDuration="2m14.852947539s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:50.847510887 +0000 UTC m=+157.842339627" watchObservedRunningTime="2026-01-23 14:06:50.852947539 +0000 UTC m=+157.847776279" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.135895 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-84gx7"] Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.138522 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-84gx7" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.142360 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.174150 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-84gx7"] Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.197266 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.197303 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.229130 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-998gd"] Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.238292 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.240579 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.273659 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e3253a9-fac0-401c-8e02-52758dbc40f3-catalog-content\") pod \"redhat-operators-84gx7\" (UID: \"0e3253a9-fac0-401c-8e02-52758dbc40f3\") " pod="openshift-marketplace/redhat-operators-84gx7" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.273704 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e3253a9-fac0-401c-8e02-52758dbc40f3-utilities\") pod \"redhat-operators-84gx7\" (UID: \"0e3253a9-fac0-401c-8e02-52758dbc40f3\") " pod="openshift-marketplace/redhat-operators-84gx7" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.273741 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2k5h\" (UniqueName: \"kubernetes.io/projected/0e3253a9-fac0-401c-8e02-52758dbc40f3-kube-api-access-h2k5h\") pod \"redhat-operators-84gx7\" (UID: \"0e3253a9-fac0-401c-8e02-52758dbc40f3\") " pod="openshift-marketplace/redhat-operators-84gx7" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.376279 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d6b6f17-bb56-49ba-8487-6e07346780a1-secret-volume\") pod \"2d6b6f17-bb56-49ba-8487-6e07346780a1\" (UID: \"2d6b6f17-bb56-49ba-8487-6e07346780a1\") " Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.376334 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d6b6f17-bb56-49ba-8487-6e07346780a1-config-volume\") pod \"2d6b6f17-bb56-49ba-8487-6e07346780a1\" (UID: \"2d6b6f17-bb56-49ba-8487-6e07346780a1\") " Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.376397 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n99rp\" (UniqueName: \"kubernetes.io/projected/2d6b6f17-bb56-49ba-8487-6e07346780a1-kube-api-access-n99rp\") pod \"2d6b6f17-bb56-49ba-8487-6e07346780a1\" (UID: \"2d6b6f17-bb56-49ba-8487-6e07346780a1\") " Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.376559 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e3253a9-fac0-401c-8e02-52758dbc40f3-catalog-content\") pod \"redhat-operators-84gx7\" (UID: \"0e3253a9-fac0-401c-8e02-52758dbc40f3\") " pod="openshift-marketplace/redhat-operators-84gx7" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.376584 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e3253a9-fac0-401c-8e02-52758dbc40f3-utilities\") pod \"redhat-operators-84gx7\" (UID: \"0e3253a9-fac0-401c-8e02-52758dbc40f3\") " pod="openshift-marketplace/redhat-operators-84gx7" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.376617 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2k5h\" (UniqueName: \"kubernetes.io/projected/0e3253a9-fac0-401c-8e02-52758dbc40f3-kube-api-access-h2k5h\") pod \"redhat-operators-84gx7\" (UID: \"0e3253a9-fac0-401c-8e02-52758dbc40f3\") " 
pod="openshift-marketplace/redhat-operators-84gx7" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.379395 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d6b6f17-bb56-49ba-8487-6e07346780a1-config-volume" (OuterVolumeSpecName: "config-volume") pod "2d6b6f17-bb56-49ba-8487-6e07346780a1" (UID: "2d6b6f17-bb56-49ba-8487-6e07346780a1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.379752 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e3253a9-fac0-401c-8e02-52758dbc40f3-catalog-content\") pod \"redhat-operators-84gx7\" (UID: \"0e3253a9-fac0-401c-8e02-52758dbc40f3\") " pod="openshift-marketplace/redhat-operators-84gx7" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.379982 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e3253a9-fac0-401c-8e02-52758dbc40f3-utilities\") pod \"redhat-operators-84gx7\" (UID: \"0e3253a9-fac0-401c-8e02-52758dbc40f3\") " pod="openshift-marketplace/redhat-operators-84gx7" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.391372 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d6b6f17-bb56-49ba-8487-6e07346780a1-kube-api-access-n99rp" (OuterVolumeSpecName: "kube-api-access-n99rp") pod "2d6b6f17-bb56-49ba-8487-6e07346780a1" (UID: "2d6b6f17-bb56-49ba-8487-6e07346780a1"). InnerVolumeSpecName "kube-api-access-n99rp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.391677 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d6b6f17-bb56-49ba-8487-6e07346780a1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2d6b6f17-bb56-49ba-8487-6e07346780a1" (UID: "2d6b6f17-bb56-49ba-8487-6e07346780a1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.399570 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2k5h\" (UniqueName: \"kubernetes.io/projected/0e3253a9-fac0-401c-8e02-52758dbc40f3-kube-api-access-h2k5h\") pod \"redhat-operators-84gx7\" (UID: \"0e3253a9-fac0-401c-8e02-52758dbc40f3\") " pod="openshift-marketplace/redhat-operators-84gx7" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.480645 4775 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d6b6f17-bb56-49ba-8487-6e07346780a1-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.480689 4775 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d6b6f17-bb56-49ba-8487-6e07346780a1-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.480703 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n99rp\" (UniqueName: \"kubernetes.io/projected/2d6b6f17-bb56-49ba-8487-6e07346780a1-kube-api-access-n99rp\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.503139 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-84gx7" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.559199 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-stflq"] Jan 23 14:06:51 crc kubenswrapper[4775]: E0123 14:06:51.559496 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d6b6f17-bb56-49ba-8487-6e07346780a1" containerName="collect-profiles" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.559513 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d6b6f17-bb56-49ba-8487-6e07346780a1" containerName="collect-profiles" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.559667 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d6b6f17-bb56-49ba-8487-6e07346780a1" containerName="collect-profiles" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.560524 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.567637 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-stflq"] Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.596865 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.596935 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.605849 4775 patch_prober.go:28] interesting pod/console-f9d7485db-fgb82 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.605907 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fgb82" podUID="a6821f92-2d15-4dc0-92ed-7a30cef98db9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.651947 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.667968 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:06:51 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:06:51 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:06:51 crc kubenswrapper[4775]: healthz check failed Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.668031 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.683826 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9f29362d-380a-46e7-b163-0ff42600d563-catalog-content\") pod \"redhat-operators-stflq\" (UID: \"9f29362d-380a-46e7-b163-0ff42600d563\") " pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.683870 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj6gc\" (UniqueName: \"kubernetes.io/projected/9f29362d-380a-46e7-b163-0ff42600d563-kube-api-access-nj6gc\") pod \"redhat-operators-stflq\" (UID: \"9f29362d-380a-46e7-b163-0ff42600d563\") " pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.684021 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f29362d-380a-46e7-b163-0ff42600d563-utilities\") pod \"redhat-operators-stflq\" (UID: \"9f29362d-380a-46e7-b163-0ff42600d563\") " pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.731839 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.774473 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" event={"ID":"2d6b6f17-bb56-49ba-8487-6e07346780a1","Type":"ContainerDied","Data":"87bcaa2b52f967df4d7cb67d7c4f5117d6253d2482ec76ad6ef22eaa91c61737"} Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.774510 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87bcaa2b52f967df4d7cb67d7c4f5117d6253d2482ec76ad6ef22eaa91c61737" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.785591 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f29362d-380a-46e7-b163-0ff42600d563-utilities\") pod \"redhat-operators-stflq\" (UID: \"9f29362d-380a-46e7-b163-0ff42600d563\") " pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.785676 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f29362d-380a-46e7-b163-0ff42600d563-catalog-content\") pod \"redhat-operators-stflq\" (UID: \"9f29362d-380a-46e7-b163-0ff42600d563\") " pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.785698 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj6gc\" (UniqueName: \"kubernetes.io/projected/9f29362d-380a-46e7-b163-0ff42600d563-kube-api-access-nj6gc\") pod \"redhat-operators-stflq\" (UID: \"9f29362d-380a-46e7-b163-0ff42600d563\") " pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.801398 4775 generic.go:334] "Generic (PLEG): container finished" podID="4927c747-c679-46bf-bcc6-485f87f885ab" containerID="314e3c9c844a6677c18f60414390ec85b7864dca6d7ccf08978dd36224f72f04" exitCode=0 Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.801490 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"4927c747-c679-46bf-bcc6-485f87f885ab","Type":"ContainerDied","Data":"314e3c9c844a6677c18f60414390ec85b7864dca6d7ccf08978dd36224f72f04"} Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.803988 4775 generic.go:334] "Generic (PLEG): container finished" podID="e59d5724-424f-4151-98a4-c2cfa3918ac0" containerID="b99c9f768aa87908f3ac8df6adf51f693264f7a4696b77a222908931aa45eca9" exitCode=0 Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.804025 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q6l68" event={"ID":"e59d5724-424f-4151-98a4-c2cfa3918ac0","Type":"ContainerDied","Data":"b99c9f768aa87908f3ac8df6adf51f693264f7a4696b77a222908931aa45eca9"} Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.804044 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q6l68" event={"ID":"e59d5724-424f-4151-98a4-c2cfa3918ac0","Type":"ContainerStarted","Data":"26c35738c37491d0603ee348b5fe634ea59da9d48f5e4b15355f05e6dc983614"} Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.811930 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-998gd" event={"ID":"a25e2625-85e2-4f61-a654-347c5d111fc2","Type":"ContainerStarted","Data":"ca983591e9c5773d2d910396e97f6529e836009e39c2ca638887beada7a160d7"} Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.811985 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-998gd" event={"ID":"a25e2625-85e2-4f61-a654-347c5d111fc2","Type":"ContainerStarted","Data":"3ac2cbde2ce107b51f2fd46e9adae179e9362f5a9c3e49977d3cabfab8d5c7a8"} Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.838671 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f29362d-380a-46e7-b163-0ff42600d563-utilities\") pod \"redhat-operators-stflq\" (UID: \"9f29362d-380a-46e7-b163-0ff42600d563\") " pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.838954 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f29362d-380a-46e7-b163-0ff42600d563-catalog-content\") pod \"redhat-operators-stflq\" (UID: \"9f29362d-380a-46e7-b163-0ff42600d563\") " pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.839179 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.844825 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj6gc\" (UniqueName: \"kubernetes.io/projected/9f29362d-380a-46e7-b163-0ff42600d563-kube-api-access-nj6gc\") pod \"redhat-operators-stflq\" (UID: \"9f29362d-380a-46e7-b163-0ff42600d563\") " pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.894578 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:06:52 crc kubenswrapper[4775]: I0123 14:06:52.048746 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-84gx7"] Jan 23 14:06:52 crc kubenswrapper[4775]: W0123 14:06:52.096065 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e3253a9_fac0_401c_8e02_52758dbc40f3.slice/crio-15af52003ac596b61d4d000ce7f453341ef0c574add7e4ae39f4de44a23d82f4 WatchSource:0}: Error finding container 15af52003ac596b61d4d000ce7f453341ef0c574add7e4ae39f4de44a23d82f4: Status 404 returned error can't find the container with id 15af52003ac596b61d4d000ce7f453341ef0c574add7e4ae39f4de44a23d82f4 Jan 23 14:06:52 crc kubenswrapper[4775]: I0123 14:06:52.433477 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-stflq"] Jan 23 14:06:52 crc kubenswrapper[4775]: I0123 14:06:52.651639 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:06:52 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:06:52 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:06:52 crc kubenswrapper[4775]: healthz check failed Jan 23 14:06:52 crc kubenswrapper[4775]: I0123 14:06:52.651705 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:06:52 crc kubenswrapper[4775]: I0123 14:06:52.828398 4775 generic.go:334] "Generic (PLEG): container finished" podID="9f29362d-380a-46e7-b163-0ff42600d563" containerID="8cf1d207d3c181ec1fe849262ab8dacc707e0308d2b5ce3e6df1a12ceacccc47" exitCode=0 Jan 23 14:06:52 crc kubenswrapper[4775]: I0123 14:06:52.828456 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stflq" event={"ID":"9f29362d-380a-46e7-b163-0ff42600d563","Type":"ContainerDied","Data":"8cf1d207d3c181ec1fe849262ab8dacc707e0308d2b5ce3e6df1a12ceacccc47"} Jan 23 14:06:52 crc kubenswrapper[4775]: I0123 14:06:52.828482 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stflq" event={"ID":"9f29362d-380a-46e7-b163-0ff42600d563","Type":"ContainerStarted","Data":"50edf2899c3c4bd4f94febab7dade88c7fd87dc6b2dfbbaffdba8627cd2c9677"} Jan 23 14:06:52 crc kubenswrapper[4775]: I0123 14:06:52.831402 4775 generic.go:334] "Generic (PLEG): container finished" podID="a25e2625-85e2-4f61-a654-347c5d111fc2" containerID="ca983591e9c5773d2d910396e97f6529e836009e39c2ca638887beada7a160d7" exitCode=0 Jan 23 14:06:52 crc kubenswrapper[4775]: I0123 14:06:52.831445 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-998gd" event={"ID":"a25e2625-85e2-4f61-a654-347c5d111fc2","Type":"ContainerDied","Data":"ca983591e9c5773d2d910396e97f6529e836009e39c2ca638887beada7a160d7"} Jan 23 14:06:52 crc kubenswrapper[4775]: I0123 14:06:52.832844 4775 generic.go:334] "Generic (PLEG): container finished" podID="0e3253a9-fac0-401c-8e02-52758dbc40f3" containerID="33e54abbac164ceea7f804e54924e8f9324295ef8959032204bb2d352664a565" exitCode=0 Jan 23 14:06:52 crc kubenswrapper[4775]: 
I0123 14:06:52.832981 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84gx7" event={"ID":"0e3253a9-fac0-401c-8e02-52758dbc40f3","Type":"ContainerDied","Data":"33e54abbac164ceea7f804e54924e8f9324295ef8959032204bb2d352664a565"} Jan 23 14:06:52 crc kubenswrapper[4775]: I0123 14:06:52.833031 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84gx7" event={"ID":"0e3253a9-fac0-401c-8e02-52758dbc40f3","Type":"ContainerStarted","Data":"15af52003ac596b61d4d000ce7f453341ef0c574add7e4ae39f4de44a23d82f4"} Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.226507 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.226581 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.454176 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.650692 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:06:53 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:06:53 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:06:53 crc kubenswrapper[4775]: healthz check failed Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.650766 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.654487 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4927c747-c679-46bf-bcc6-485f87f885ab-kube-api-access\") pod \"4927c747-c679-46bf-bcc6-485f87f885ab\" (UID: \"4927c747-c679-46bf-bcc6-485f87f885ab\") " Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.654550 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4927c747-c679-46bf-bcc6-485f87f885ab-kubelet-dir\") pod \"4927c747-c679-46bf-bcc6-485f87f885ab\" (UID: \"4927c747-c679-46bf-bcc6-485f87f885ab\") " Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.654851 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4927c747-c679-46bf-bcc6-485f87f885ab-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4927c747-c679-46bf-bcc6-485f87f885ab" (UID: "4927c747-c679-46bf-bcc6-485f87f885ab"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.680354 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4927c747-c679-46bf-bcc6-485f87f885ab-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4927c747-c679-46bf-bcc6-485f87f885ab" (UID: "4927c747-c679-46bf-bcc6-485f87f885ab"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.745893 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 14:06:53 crc kubenswrapper[4775]: E0123 14:06:53.746181 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4927c747-c679-46bf-bcc6-485f87f885ab" containerName="pruner" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.746199 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="4927c747-c679-46bf-bcc6-485f87f885ab" containerName="pruner" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.746311 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="4927c747-c679-46bf-bcc6-485f87f885ab" containerName="pruner" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.746633 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.746724 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.748440 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.748610 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.756204 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/266e861e-ba27-43d0-adfd-79b593bdb663-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"266e861e-ba27-43d0-adfd-79b593bdb663\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.756283 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/266e861e-ba27-43d0-adfd-79b593bdb663-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"266e861e-ba27-43d0-adfd-79b593bdb663\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.756351 4775 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4927c747-c679-46bf-bcc6-485f87f885ab-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.756364 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4927c747-c679-46bf-bcc6-485f87f885ab-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.850610 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"4927c747-c679-46bf-bcc6-485f87f885ab","Type":"ContainerDied","Data":"c5e2cf7dc94bca19d391d27aa9b768b85ccfa71fad8a84b4ced6560f9dc08f72"} Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.850649 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.850657 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5e2cf7dc94bca19d391d27aa9b768b85ccfa71fad8a84b4ced6560f9dc08f72" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.857501 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/266e861e-ba27-43d0-adfd-79b593bdb663-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"266e861e-ba27-43d0-adfd-79b593bdb663\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.857599 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/266e861e-ba27-43d0-adfd-79b593bdb663-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"266e861e-ba27-43d0-adfd-79b593bdb663\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.857637 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/266e861e-ba27-43d0-adfd-79b593bdb663-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"266e861e-ba27-43d0-adfd-79b593bdb663\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.875750 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/266e861e-ba27-43d0-adfd-79b593bdb663-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"266e861e-ba27-43d0-adfd-79b593bdb663\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 14:06:54 crc kubenswrapper[4775]: I0123 14:06:54.070889 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 14:06:54 crc kubenswrapper[4775]: I0123 14:06:54.313452 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 14:06:54 crc kubenswrapper[4775]: W0123 14:06:54.340959 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod266e861e_ba27_43d0_adfd_79b593bdb663.slice/crio-b62f54a2023c4313813368c02113d16054bb482ab67e8cc33302ffa88d68ab0a WatchSource:0}: Error finding container b62f54a2023c4313813368c02113d16054bb482ab67e8cc33302ffa88d68ab0a: Status 404 returned error can't find the container with id b62f54a2023c4313813368c02113d16054bb482ab67e8cc33302ffa88d68ab0a Jan 23 14:06:54 crc kubenswrapper[4775]: I0123 14:06:54.649745 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:06:54 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:06:54 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:06:54 crc kubenswrapper[4775]: healthz check failed Jan 23 14:06:54 crc kubenswrapper[4775]: I0123 14:06:54.649895 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:06:54 crc kubenswrapper[4775]: I0123 14:06:54.857549 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"266e861e-ba27-43d0-adfd-79b593bdb663","Type":"ContainerStarted","Data":"b62f54a2023c4313813368c02113d16054bb482ab67e8cc33302ffa88d68ab0a"} Jan 23 14:06:55 crc kubenswrapper[4775]: I0123 14:06:55.562695 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:55 crc kubenswrapper[4775]: I0123 14:06:55.567387 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:55 crc kubenswrapper[4775]: I0123 14:06:55.655025 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:06:55 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:06:55 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:06:55 crc kubenswrapper[4775]: healthz check failed Jan 23 14:06:55 crc kubenswrapper[4775]: I0123 14:06:55.655081 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:06:56 crc kubenswrapper[4775]: I0123 14:06:56.678498 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:06:56 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:06:56 crc kubenswrapper[4775]: 
[+]process-running ok Jan 23 14:06:56 crc kubenswrapper[4775]: healthz check failed Jan 23 14:06:56 crc kubenswrapper[4775]: I0123 14:06:56.678750 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:06:56 crc kubenswrapper[4775]: I0123 14:06:56.908270 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-bvqqf" Jan 23 14:06:56 crc kubenswrapper[4775]: I0123 14:06:56.916993 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"266e861e-ba27-43d0-adfd-79b593bdb663","Type":"ContainerStarted","Data":"3134730bce153d56131545a3a9d6e4f71faffb1f17d6451fcb3d28adca9ec8ec"} Jan 23 14:06:56 crc kubenswrapper[4775]: I0123 14:06:56.957242 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.957225015 podStartE2EDuration="3.957225015s" podCreationTimestamp="2026-01-23 14:06:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:56.953313018 +0000 UTC m=+163.948141758" watchObservedRunningTime="2026-01-23 14:06:56.957225015 +0000 UTC m=+163.952053755" Jan 23 14:06:57 crc kubenswrapper[4775]: I0123 14:06:57.648859 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:06:57 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:06:57 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:06:57 crc kubenswrapper[4775]: healthz check failed Jan 23 14:06:57 crc kubenswrapper[4775]: I0123 14:06:57.648917 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:06:57 crc kubenswrapper[4775]: I0123 14:06:57.923955 4775 generic.go:334] "Generic (PLEG): container finished" podID="266e861e-ba27-43d0-adfd-79b593bdb663" containerID="3134730bce153d56131545a3a9d6e4f71faffb1f17d6451fcb3d28adca9ec8ec" exitCode=0 Jan 23 14:06:57 crc kubenswrapper[4775]: I0123 14:06:57.924014 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"266e861e-ba27-43d0-adfd-79b593bdb663","Type":"ContainerDied","Data":"3134730bce153d56131545a3a9d6e4f71faffb1f17d6451fcb3d28adca9ec8ec"} Jan 23 14:06:58 crc kubenswrapper[4775]: I0123 14:06:58.649770 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:06:58 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:06:58 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:06:58 crc kubenswrapper[4775]: healthz check failed Jan 23 14:06:58 crc kubenswrapper[4775]: I0123 14:06:58.649850 4775 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:06:58 crc kubenswrapper[4775]: I0123 14:06:58.748566 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs\") pod \"network-metrics-daemon-47lz2\" (UID: \"63ed1a97-c97e-40d0-afdf-260c475dc83f\") " pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:58 crc kubenswrapper[4775]: I0123 14:06:58.754871 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs\") pod \"network-metrics-daemon-47lz2\" (UID: \"63ed1a97-c97e-40d0-afdf-260c475dc83f\") " pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:59 crc kubenswrapper[4775]: I0123 14:06:59.005497 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:59 crc kubenswrapper[4775]: I0123 14:06:59.322624 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 14:06:59 crc kubenswrapper[4775]: I0123 14:06:59.375524 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-47lz2"] Jan 23 14:06:59 crc kubenswrapper[4775]: W0123 14:06:59.386282 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63ed1a97_c97e_40d0_afdf_260c475dc83f.slice/crio-90cb5916be883c63dde6196ad162c19199860a7014a304debd1893faed3e0073 WatchSource:0}: Error finding container 90cb5916be883c63dde6196ad162c19199860a7014a304debd1893faed3e0073: Status 404 returned error can't find the container with id 90cb5916be883c63dde6196ad162c19199860a7014a304debd1893faed3e0073 Jan 23 14:06:59 crc kubenswrapper[4775]: I0123 14:06:59.410872 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/266e861e-ba27-43d0-adfd-79b593bdb663-kube-api-access\") pod \"266e861e-ba27-43d0-adfd-79b593bdb663\" (UID: \"266e861e-ba27-43d0-adfd-79b593bdb663\") " Jan 23 14:06:59 crc kubenswrapper[4775]: I0123 14:06:59.410936 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/266e861e-ba27-43d0-adfd-79b593bdb663-kubelet-dir\") pod \"266e861e-ba27-43d0-adfd-79b593bdb663\" (UID: \"266e861e-ba27-43d0-adfd-79b593bdb663\") " Jan 23 14:06:59 crc kubenswrapper[4775]: I0123 14:06:59.411420 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/266e861e-ba27-43d0-adfd-79b593bdb663-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "266e861e-ba27-43d0-adfd-79b593bdb663" (UID: "266e861e-ba27-43d0-adfd-79b593bdb663"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:06:59 crc kubenswrapper[4775]: I0123 14:06:59.421191 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/266e861e-ba27-43d0-adfd-79b593bdb663-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "266e861e-ba27-43d0-adfd-79b593bdb663" (UID: "266e861e-ba27-43d0-adfd-79b593bdb663"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:06:59 crc kubenswrapper[4775]: I0123 14:06:59.512524 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/266e861e-ba27-43d0-adfd-79b593bdb663-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:59 crc kubenswrapper[4775]: I0123 14:06:59.512559 4775 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/266e861e-ba27-43d0-adfd-79b593bdb663-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:59 crc kubenswrapper[4775]: I0123 14:06:59.649610 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:06:59 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:06:59 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:06:59 crc kubenswrapper[4775]: healthz check failed Jan 23 14:06:59 crc kubenswrapper[4775]: I0123 14:06:59.649666 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:07:00 crc kubenswrapper[4775]: I0123 14:07:00.018751 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"266e861e-ba27-43d0-adfd-79b593bdb663","Type":"ContainerDied","Data":"b62f54a2023c4313813368c02113d16054bb482ab67e8cc33302ffa88d68ab0a"} Jan 23 14:07:00 crc kubenswrapper[4775]: I0123 14:07:00.018791 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b62f54a2023c4313813368c02113d16054bb482ab67e8cc33302ffa88d68ab0a" Jan 23 14:07:00 crc kubenswrapper[4775]: I0123 14:07:00.018867 4775 util.go:48] "No ready sandbox for pod can be found. 
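[annotation] The "Probe failed" / "HTTP probe failed with statuscode: 500" entries above come from kubelet's HTTP prober (prober.go / patch_prober.go in this build). A minimal standalone Go sketch of the observable check — GET the endpoint, treat 2xx/3xx as success, surface the start of the body on failure; the URL and timeout below are illustrative assumptions, not values taken from this cluster:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHTTP returns nil on HTTP 2xx/3xx and an error otherwise, carrying the
// start of the response body the way the log's "start-of-body=" field does.
func probeHTTP(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused", as seen for the console pod later in the log
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(io.LimitReader(resp.Body, 256))
	if resp.StatusCode >= http.StatusBadRequest {
		return fmt.Errorf("HTTP probe failed with statuscode: %d, start-of-body=%s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	// Hypothetical endpoint; the router's real startup probe hits its own healthz port.
	if err := probeHTTP("http://127.0.0.1:1936/healthz/ready", 1*time.Second); err != nil {
		fmt.Println("Probe failed:", err)
	}
}

The [-]backend-http / [+]process-running lines in the probe output are the healthz endpoint's per-check report: any [-] check fails the whole probe.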
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 14:07:00 crc kubenswrapper[4775]: I0123 14:07:00.020853 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-47lz2" event={"ID":"63ed1a97-c97e-40d0-afdf-260c475dc83f","Type":"ContainerStarted","Data":"90cb5916be883c63dde6196ad162c19199860a7014a304debd1893faed3e0073"} Jan 23 14:07:00 crc kubenswrapper[4775]: I0123 14:07:00.621102 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-mvqcg" Jan 23 14:07:00 crc kubenswrapper[4775]: I0123 14:07:00.650907 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:07:00 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:07:00 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:07:00 crc kubenswrapper[4775]: healthz check failed Jan 23 14:07:00 crc kubenswrapper[4775]: I0123 14:07:00.650961 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:07:01 crc kubenswrapper[4775]: I0123 14:07:01.033205 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-47lz2" event={"ID":"63ed1a97-c97e-40d0-afdf-260c475dc83f","Type":"ContainerStarted","Data":"8bd9ffc421e594fa14511a7227054cc0cea122e754a97d6f09b8248a3fe1948a"} Jan 23 14:07:01 crc kubenswrapper[4775]: I0123 14:07:01.596377 4775 patch_prober.go:28] interesting pod/console-f9d7485db-fgb82 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 23 14:07:01 crc kubenswrapper[4775]: I0123 14:07:01.596466 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fgb82" podUID="a6821f92-2d15-4dc0-92ed-7a30cef98db9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 23 14:07:01 crc kubenswrapper[4775]: I0123 14:07:01.652093 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:07:01 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:07:01 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:07:01 crc kubenswrapper[4775]: healthz check failed Jan 23 14:07:01 crc kubenswrapper[4775]: I0123 14:07:01.652505 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:07:02 crc kubenswrapper[4775]: I0123 14:07:02.650389 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: 
reason withheld Jan 23 14:07:02 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:07:02 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:07:02 crc kubenswrapper[4775]: healthz check failed Jan 23 14:07:02 crc kubenswrapper[4775]: I0123 14:07:02.650463 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:07:03 crc kubenswrapper[4775]: I0123 14:07:03.649829 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:07:03 crc kubenswrapper[4775]: I0123 14:07:03.653023 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:07:06 crc kubenswrapper[4775]: I0123 14:07:06.612354 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fp8bb"] Jan 23 14:07:06 crc kubenswrapper[4775]: I0123 14:07:06.612737 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" podUID="4a8470cc-442d-4efc-91a2-af7e4fe75b3a" containerName="controller-manager" containerID="cri-o://f4b1eb7532640c0119fea3d1dd873eab326ab51390a8e59dcd343707c94098b9" gracePeriod=30 Jan 23 14:07:06 crc kubenswrapper[4775]: I0123 14:07:06.623738 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn"] Jan 23 14:07:06 crc kubenswrapper[4775]: I0123 14:07:06.623983 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" podUID="a9a77e3c-0e93-45f9-ab81-7dfbd2916588" containerName="route-controller-manager" containerID="cri-o://0180d579f234a3f26f7595abf341e660581404c07fa388dc580f716a183ffec5" gracePeriod=30 Jan 23 14:07:07 crc kubenswrapper[4775]: I0123 14:07:07.100775 4775 generic.go:334] "Generic (PLEG): container finished" podID="4a8470cc-442d-4efc-91a2-af7e4fe75b3a" containerID="f4b1eb7532640c0119fea3d1dd873eab326ab51390a8e59dcd343707c94098b9" exitCode=0 Jan 23 14:07:07 crc kubenswrapper[4775]: I0123 14:07:07.100869 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" event={"ID":"4a8470cc-442d-4efc-91a2-af7e4fe75b3a","Type":"ContainerDied","Data":"f4b1eb7532640c0119fea3d1dd873eab326ab51390a8e59dcd343707c94098b9"} Jan 23 14:07:07 crc kubenswrapper[4775]: I0123 14:07:07.102609 4775 generic.go:334] "Generic (PLEG): container finished" podID="a9a77e3c-0e93-45f9-ab81-7dfbd2916588" containerID="0180d579f234a3f26f7595abf341e660581404c07fa388dc580f716a183ffec5" exitCode=0 Jan 23 14:07:07 crc kubenswrapper[4775]: I0123 14:07:07.102649 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" event={"ID":"a9a77e3c-0e93-45f9-ab81-7dfbd2916588","Type":"ContainerDied","Data":"0180d579f234a3f26f7595abf341e660581404c07fa388dc580f716a183ffec5"} Jan 23 14:07:09 crc kubenswrapper[4775]: I0123 14:07:09.783909 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:07:10 crc kubenswrapper[4775]: 
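[annotation] "Killing container with a grace period ... gracePeriod=30" above is the standard Kubernetes termination sequence: the runtime sends SIGTERM, waits up to the grace period, then sends SIGKILL. A minimal sketch of that shape against a plain OS process (the real path goes through CRI-O, not os.Process; unix-only):

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// killWithGracePeriod mirrors the SIGTERM-then-SIGKILL sequence.
func killWithGracePeriod(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period (exitCode=0 in the PLEG events above)
	case <-time.After(grace):
		_ = cmd.Process.Kill() // grace period expired: SIGKILL
		return <-done          // reap the process
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// gracePeriod=30 in the log is the pod's terminationGracePeriodSeconds.
	fmt.Println(killWithGracePeriod(cmd, 30*time.Second))
}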
Jan 23 14:07:10 crc kubenswrapper[4775]: I0123 14:07:10.138090 4775 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fp8bb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.49:8443/healthz\": dial tcp 10.217.0.49:8443: connect: connection refused" start-of-body=
Jan 23 14:07:10 crc kubenswrapper[4775]: I0123 14:07:10.138165 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" podUID="4a8470cc-442d-4efc-91a2-af7e4fe75b3a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.49:8443/healthz\": dial tcp 10.217.0.49:8443: connect: connection refused"
Jan 23 14:07:10 crc kubenswrapper[4775]: I0123 14:07:10.571624 4775 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-lqcpn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Jan 23 14:07:10 crc kubenswrapper[4775]: I0123 14:07:10.571670 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" podUID="a9a77e3c-0e93-45f9-ab81-7dfbd2916588" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Jan 23 14:07:11 crc kubenswrapper[4775]: I0123 14:07:11.660773 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-fgb82"
Jan 23 14:07:11 crc kubenswrapper[4775]: I0123 14:07:11.669142 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-fgb82"
Jan 23 14:07:21 crc kubenswrapper[4775]: I0123 14:07:21.138386 4775 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fp8bb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 23 14:07:21 crc kubenswrapper[4775]: I0123 14:07:21.139199 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" podUID="4a8470cc-442d-4efc-91a2-af7e4fe75b3a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:07:21 crc kubenswrapper[4775]: I0123 14:07:21.570319 4775 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-lqcpn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 23 14:07:21 crc kubenswrapper[4775]: I0123 14:07:21.570427 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" podUID="a9a77e3c-0e93-45f9-ab81-7dfbd2916588" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:07:22 crc kubenswrapper[4775]: I0123 14:07:22.074579 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6"
Jan 23 14:07:23 crc kubenswrapper[4775]: I0123 14:07:23.218669 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:07:23 crc kubenswrapper[4775]: I0123 14:07:23.219062 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:07:25 crc kubenswrapper[4775]: I0123 14:07:25.596446 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 14:07:28 crc kubenswrapper[4775]: E0123 14:07:28.293616 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 23 14:07:28 crc kubenswrapper[4775]: E0123 14:07:28.294117 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dhknv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-pphm8_openshift-marketplace(1a627ae2-fe8d-403e-9d14-3c3ace588da5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
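[annotation] The ErrImagePull entries above turn into ImagePullBackOff a few seconds later (14:07:28 -> 14:07:34): kubelet retries failed pulls with exponential backoff. A sketch of that policy with assumed parameters (10s base doubling to a 5m cap, mirroring upstream defaults; the exact numbers are an assumption, not read from this node's config):

package main

import (
	"fmt"
	"time"
)

// backoff returns the wait before retry n (0-based), doubling from base up to a cap.
func backoff(n int) time.Duration {
	const (
		base     = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	d := base << n // 10s, 20s, 40s, 80s, ...
	if d <= 0 || d > maxDelay { // d <= 0 guards against shift overflow
		return maxDelay
	}
	return d
}

func main() {
	for n := 0; n < 7; n++ {
		fmt.Printf("pull attempt %d: back off %v\n", n+1, backoff(n))
	}
}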
Jan 23 14:07:28 crc kubenswrapper[4775]: E0123 14:07:28.295284 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-pphm8" podUID="1a627ae2-fe8d-403e-9d14-3c3ace588da5"
Jan 23 14:07:29 crc kubenswrapper[4775]: I0123 14:07:29.122090 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 23 14:07:29 crc kubenswrapper[4775]: E0123 14:07:29.122407 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="266e861e-ba27-43d0-adfd-79b593bdb663" containerName="pruner"
Jan 23 14:07:29 crc kubenswrapper[4775]: I0123 14:07:29.122428 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="266e861e-ba27-43d0-adfd-79b593bdb663" containerName="pruner"
Jan 23 14:07:29 crc kubenswrapper[4775]: I0123 14:07:29.122564 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="266e861e-ba27-43d0-adfd-79b593bdb663" containerName="pruner"
Jan 23 14:07:29 crc kubenswrapper[4775]: I0123 14:07:29.123058 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 23 14:07:29 crc kubenswrapper[4775]: I0123 14:07:29.125156 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 23 14:07:29 crc kubenswrapper[4775]: I0123 14:07:29.128827 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 23 14:07:29 crc kubenswrapper[4775]: I0123 14:07:29.129422 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 23 14:07:29 crc kubenswrapper[4775]: I0123 14:07:29.276899 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 23 14:07:29 crc kubenswrapper[4775]: I0123 14:07:29.277142 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 23 14:07:29 crc kubenswrapper[4775]: I0123 14:07:29.378585 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 23 14:07:29 crc kubenswrapper[4775]: I0123 14:07:29.378694 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 23 14:07:29 crc kubenswrapper[4775]: I0123 14:07:29.378771 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 23 14:07:29 crc kubenswrapper[4775]: I0123 14:07:29.397399 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 23 14:07:29 crc kubenswrapper[4775]: I0123 14:07:29.494131 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 23 14:07:31 crc kubenswrapper[4775]: I0123 14:07:31.137093 4775 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fp8bb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 23 14:07:31 crc kubenswrapper[4775]: I0123 14:07:31.137168 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" podUID="4a8470cc-442d-4efc-91a2-af7e4fe75b3a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:07:31 crc kubenswrapper[4775]: I0123 14:07:31.571000 4775 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-lqcpn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 23 14:07:31 crc kubenswrapper[4775]: I0123 14:07:31.571436 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" podUID="a9a77e3c-0e93-45f9-ab81-7dfbd2916588" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:07:34 crc kubenswrapper[4775]: I0123 14:07:34.126187 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 23 14:07:34 crc kubenswrapper[4775]: I0123 14:07:34.128180 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 14:07:34 crc kubenswrapper[4775]: I0123 14:07:34.131423 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 14:07:34 crc kubenswrapper[4775]: E0123 14:07:34.134450 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-pphm8" podUID="1a627ae2-fe8d-403e-9d14-3c3ace588da5" Jan 23 14:07:34 crc kubenswrapper[4775]: I0123 14:07:34.326005 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0d34b3f-ebda-4e48-82ec-36db9214c42a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b0d34b3f-ebda-4e48-82ec-36db9214c42a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 14:07:34 crc kubenswrapper[4775]: I0123 14:07:34.326061 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b0d34b3f-ebda-4e48-82ec-36db9214c42a-var-lock\") pod \"installer-9-crc\" (UID: \"b0d34b3f-ebda-4e48-82ec-36db9214c42a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 14:07:34 crc kubenswrapper[4775]: I0123 14:07:34.326097 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0d34b3f-ebda-4e48-82ec-36db9214c42a-kube-api-access\") pod \"installer-9-crc\" (UID: \"b0d34b3f-ebda-4e48-82ec-36db9214c42a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 14:07:34 crc kubenswrapper[4775]: E0123 14:07:34.395814 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 23 14:07:34 crc kubenswrapper[4775]: E0123 14:07:34.395944 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mds26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-998gd_openshift-marketplace(a25e2625-85e2-4f61-a654-347c5d111fc2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 14:07:34 crc kubenswrapper[4775]: E0123 14:07:34.397304 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-998gd" podUID="a25e2625-85e2-4f61-a654-347c5d111fc2" Jan 23 14:07:34 crc kubenswrapper[4775]: I0123 14:07:34.427635 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0d34b3f-ebda-4e48-82ec-36db9214c42a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b0d34b3f-ebda-4e48-82ec-36db9214c42a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 14:07:34 crc kubenswrapper[4775]: I0123 14:07:34.427676 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b0d34b3f-ebda-4e48-82ec-36db9214c42a-var-lock\") pod \"installer-9-crc\" (UID: \"b0d34b3f-ebda-4e48-82ec-36db9214c42a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 14:07:34 crc kubenswrapper[4775]: I0123 14:07:34.427701 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0d34b3f-ebda-4e48-82ec-36db9214c42a-kube-api-access\") pod \"installer-9-crc\" (UID: \"b0d34b3f-ebda-4e48-82ec-36db9214c42a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 14:07:34 crc kubenswrapper[4775]: I0123 14:07:34.427733 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0d34b3f-ebda-4e48-82ec-36db9214c42a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b0d34b3f-ebda-4e48-82ec-36db9214c42a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 14:07:34 crc kubenswrapper[4775]: I0123 14:07:34.427781 4775 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b0d34b3f-ebda-4e48-82ec-36db9214c42a-var-lock\") pod \"installer-9-crc\" (UID: \"b0d34b3f-ebda-4e48-82ec-36db9214c42a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 14:07:34 crc kubenswrapper[4775]: I0123 14:07:34.449779 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0d34b3f-ebda-4e48-82ec-36db9214c42a-kube-api-access\") pod \"installer-9-crc\" (UID: \"b0d34b3f-ebda-4e48-82ec-36db9214c42a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 14:07:34 crc kubenswrapper[4775]: I0123 14:07:34.463506 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 14:07:34 crc kubenswrapper[4775]: E0123 14:07:34.713180 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 23 14:07:34 crc kubenswrapper[4775]: E0123 14:07:34.713850 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-phm66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-q6l68_openshift-marketplace(e59d5724-424f-4151-98a4-c2cfa3918ac0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 14:07:34 crc kubenswrapper[4775]: E0123 14:07:34.715173 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-q6l68" podUID="e59d5724-424f-4151-98a4-c2cfa3918ac0" Jan 23 14:07:38 crc kubenswrapper[4775]: E0123 14:07:38.330851 4775 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-998gd" podUID="a25e2625-85e2-4f61-a654-347c5d111fc2" Jan 23 14:07:38 crc kubenswrapper[4775]: E0123 14:07:38.330851 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-q6l68" podUID="e59d5724-424f-4151-98a4-c2cfa3918ac0" Jan 23 14:07:39 crc kubenswrapper[4775]: E0123 14:07:39.890516 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 23 14:07:39 crc kubenswrapper[4775]: E0123 14:07:39.890676 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4wm7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-hdhzj_openshift-marketplace(945aeb53-25e2-4666-8fbe-a12be2948454): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 14:07:39 crc kubenswrapper[4775]: E0123 14:07:39.891879 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-hdhzj" podUID="945aeb53-25e2-4666-8fbe-a12be2948454" Jan 23 14:07:41 crc kubenswrapper[4775]: I0123 14:07:41.137686 4775 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fp8bb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 14:07:41 crc kubenswrapper[4775]: I0123 14:07:41.137754 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" podUID="4a8470cc-442d-4efc-91a2-af7e4fe75b3a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 14:07:41 crc kubenswrapper[4775]: I0123 14:07:41.571991 4775 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-lqcpn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 14:07:41 crc kubenswrapper[4775]: I0123 14:07:41.572071 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" podUID="a9a77e3c-0e93-45f9-ab81-7dfbd2916588" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 14:07:44 crc kubenswrapper[4775]: E0123 14:07:44.119470 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-hdhzj" podUID="945aeb53-25e2-4666-8fbe-a12be2948454" Jan 23 14:07:44 crc kubenswrapper[4775]: E0123 14:07:44.250565 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 23 14:07:44 crc kubenswrapper[4775]: E0123 14:07:44.250738 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nj6gc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-stflq_openshift-marketplace(9f29362d-380a-46e7-b163-0ff42600d563): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 14:07:44 crc kubenswrapper[4775]: E0123 14:07:44.251905 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-stflq" podUID="9f29362d-380a-46e7-b163-0ff42600d563" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.254725 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.277267 4775 util.go:48] "No ready sandbox for pod can be found. 
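[annotation] Every kubelet entry in this log follows the klog header format ("E0123 14:07:44.251905 4775 pod_workers.go:1301] ..."), prefixed by the journald timestamp and unit. A small Go parser for pulling severity, date, time, PID, and source location out of such lines; the regex is mine, the header layout is klog's:

package main

import (
	"fmt"
	"regexp"
)

// severity letter (I/W/E/F), MMDD date, wall time, PID, file:line, message.
var klogRe = regexp.MustCompile(`([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+([\w.]+:\d+)\] (.*)`)

func main() {
	line := `Jan 23 14:07:44 crc kubenswrapper[4775]: E0123 14:07:44.251905 4775 pod_workers.go:1301] "Error syncing pod, skipping"`
	m := klogRe.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%s\n", m[1], m[2], m[3], m[4], m[5], m[6])
}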
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.290369 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp"] Jan 23 14:07:44 crc kubenswrapper[4775]: E0123 14:07:44.290737 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a8470cc-442d-4efc-91a2-af7e4fe75b3a" containerName="controller-manager" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.290748 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a8470cc-442d-4efc-91a2-af7e4fe75b3a" containerName="controller-manager" Jan 23 14:07:44 crc kubenswrapper[4775]: E0123 14:07:44.290762 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a77e3c-0e93-45f9-ab81-7dfbd2916588" containerName="route-controller-manager" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.290768 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a77e3c-0e93-45f9-ab81-7dfbd2916588" containerName="route-controller-manager" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.290870 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9a77e3c-0e93-45f9-ab81-7dfbd2916588" containerName="route-controller-manager" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.290915 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a8470cc-442d-4efc-91a2-af7e4fe75b3a" containerName="controller-manager" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.291242 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.298745 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-proxy-ca-bundles\") pod \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.298775 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-serving-cert\") pod \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\" (UID: \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\") " Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.298794 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-client-ca\") pod \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.298849 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsv7w\" (UniqueName: \"kubernetes.io/projected/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-kube-api-access-rsv7w\") pod \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\" (UID: \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\") " Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.298869 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccjkp\" (UniqueName: \"kubernetes.io/projected/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-kube-api-access-ccjkp\") pod \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " Jan 23 14:07:44 
crc kubenswrapper[4775]: I0123 14:07:44.298914 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-config\") pod \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.298929 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-config\") pod \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\" (UID: \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\") " Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.298955 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-serving-cert\") pod \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.298996 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-client-ca\") pod \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\" (UID: \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\") " Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.299091 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/303477e6-d4ac-4cbc-a088-3d7754129bd4-serving-cert\") pod \"route-controller-manager-654598bdc5-jqdkp\" (UID: \"303477e6-d4ac-4cbc-a088-3d7754129bd4\") " pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.299126 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/303477e6-d4ac-4cbc-a088-3d7754129bd4-client-ca\") pod \"route-controller-manager-654598bdc5-jqdkp\" (UID: \"303477e6-d4ac-4cbc-a088-3d7754129bd4\") " pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.299150 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8kcw\" (UniqueName: \"kubernetes.io/projected/303477e6-d4ac-4cbc-a088-3d7754129bd4-kube-api-access-l8kcw\") pod \"route-controller-manager-654598bdc5-jqdkp\" (UID: \"303477e6-d4ac-4cbc-a088-3d7754129bd4\") " pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.299212 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/303477e6-d4ac-4cbc-a088-3d7754129bd4-config\") pod \"route-controller-manager-654598bdc5-jqdkp\" (UID: \"303477e6-d4ac-4cbc-a088-3d7754129bd4\") " pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.302342 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-client-ca" (OuterVolumeSpecName: "client-ca") pod "a9a77e3c-0e93-45f9-ab81-7dfbd2916588" (UID: "a9a77e3c-0e93-45f9-ab81-7dfbd2916588"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.303245 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-config" (OuterVolumeSpecName: "config") pod "4a8470cc-442d-4efc-91a2-af7e4fe75b3a" (UID: "4a8470cc-442d-4efc-91a2-af7e4fe75b3a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.306862 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-client-ca" (OuterVolumeSpecName: "client-ca") pod "4a8470cc-442d-4efc-91a2-af7e4fe75b3a" (UID: "4a8470cc-442d-4efc-91a2-af7e4fe75b3a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.306979 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-config" (OuterVolumeSpecName: "config") pod "a9a77e3c-0e93-45f9-ab81-7dfbd2916588" (UID: "a9a77e3c-0e93-45f9-ab81-7dfbd2916588"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.307833 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4a8470cc-442d-4efc-91a2-af7e4fe75b3a" (UID: "4a8470cc-442d-4efc-91a2-af7e4fe75b3a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.312067 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a9a77e3c-0e93-45f9-ab81-7dfbd2916588" (UID: "a9a77e3c-0e93-45f9-ab81-7dfbd2916588"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.313270 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4a8470cc-442d-4efc-91a2-af7e4fe75b3a" (UID: "4a8470cc-442d-4efc-91a2-af7e4fe75b3a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.319027 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-kube-api-access-ccjkp" (OuterVolumeSpecName: "kube-api-access-ccjkp") pod "4a8470cc-442d-4efc-91a2-af7e4fe75b3a" (UID: "4a8470cc-442d-4efc-91a2-af7e4fe75b3a"). InnerVolumeSpecName "kube-api-access-ccjkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.320425 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-kube-api-access-rsv7w" (OuterVolumeSpecName: "kube-api-access-rsv7w") pod "a9a77e3c-0e93-45f9-ab81-7dfbd2916588" (UID: "a9a77e3c-0e93-45f9-ab81-7dfbd2916588"). InnerVolumeSpecName "kube-api-access-rsv7w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.331334 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp"] Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.336660 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" event={"ID":"a9a77e3c-0e93-45f9-ab81-7dfbd2916588","Type":"ContainerDied","Data":"126d7f9344248499833b2fa9bffa79374396f9b7ca1fc1c07f0f0a3674655194"} Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.336692 4775 scope.go:117] "RemoveContainer" containerID="0180d579f234a3f26f7595abf341e660581404c07fa388dc580f716a183ffec5" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.336781 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.342963 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" event={"ID":"4a8470cc-442d-4efc-91a2-af7e4fe75b3a","Type":"ContainerDied","Data":"971aa15dc628c22efdf895129c598854abbaff49521d3e188678eecd5ae7782c"} Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.343066 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.363073 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn"] Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.370468 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn"] Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.372753 4775 scope.go:117] "RemoveContainer" containerID="f4b1eb7532640c0119fea3d1dd873eab326ab51390a8e59dcd343707c94098b9" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.374727 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fp8bb"] Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.379871 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fp8bb"] Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.399834 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/303477e6-d4ac-4cbc-a088-3d7754129bd4-client-ca\") pod \"route-controller-manager-654598bdc5-jqdkp\" (UID: \"303477e6-d4ac-4cbc-a088-3d7754129bd4\") " pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.399882 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8kcw\" (UniqueName: \"kubernetes.io/projected/303477e6-d4ac-4cbc-a088-3d7754129bd4-kube-api-access-l8kcw\") pod \"route-controller-manager-654598bdc5-jqdkp\" (UID: \"303477e6-d4ac-4cbc-a088-3d7754129bd4\") " pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.399923 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/303477e6-d4ac-4cbc-a088-3d7754129bd4-config\") pod \"route-controller-manager-654598bdc5-jqdkp\" (UID: \"303477e6-d4ac-4cbc-a088-3d7754129bd4\") " pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.401043 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/303477e6-d4ac-4cbc-a088-3d7754129bd4-client-ca\") pod \"route-controller-manager-654598bdc5-jqdkp\" (UID: \"303477e6-d4ac-4cbc-a088-3d7754129bd4\") " pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.401159 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/303477e6-d4ac-4cbc-a088-3d7754129bd4-config\") pod \"route-controller-manager-654598bdc5-jqdkp\" (UID: \"303477e6-d4ac-4cbc-a088-3d7754129bd4\") " pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.401654 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/303477e6-d4ac-4cbc-a088-3d7754129bd4-serving-cert\") pod \"route-controller-manager-654598bdc5-jqdkp\" (UID: \"303477e6-d4ac-4cbc-a088-3d7754129bd4\") " pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.401735 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.401749 4775 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.401759 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.401767 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.401775 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsv7w\" (UniqueName: \"kubernetes.io/projected/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-kube-api-access-rsv7w\") on node \"crc\" DevicePath \"\"" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.401784 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccjkp\" (UniqueName: \"kubernetes.io/projected/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-kube-api-access-ccjkp\") on node \"crc\" DevicePath \"\"" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.401793 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.401820 4775 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.401828 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.408324 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/303477e6-d4ac-4cbc-a088-3d7754129bd4-serving-cert\") pod \"route-controller-manager-654598bdc5-jqdkp\" (UID: \"303477e6-d4ac-4cbc-a088-3d7754129bd4\") " pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.415580 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.417147 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8kcw\" (UniqueName: \"kubernetes.io/projected/303477e6-d4ac-4cbc-a088-3d7754129bd4-kube-api-access-l8kcw\") pod \"route-controller-manager-654598bdc5-jqdkp\" (UID: \"303477e6-d4ac-4cbc-a088-3d7754129bd4\") " pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" Jan 23 14:07:44 crc kubenswrapper[4775]: W0123 14:07:44.430758 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb0d34b3f_ebda_4e48_82ec_36db9214c42a.slice/crio-50a3207c43535211cc781efbf364abe05d4043fb9f6a837131123ef8444aee37 WatchSource:0}: Error finding container 50a3207c43535211cc781efbf364abe05d4043fb9f6a837131123ef8444aee37: Status 404 returned error can't find the container with id 50a3207c43535211cc781efbf364abe05d4043fb9f6a837131123ef8444aee37 Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.613091 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.666694 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 14:07:45 crc kubenswrapper[4775]: I0123 14:07:45.353135 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b0d34b3f-ebda-4e48-82ec-36db9214c42a","Type":"ContainerStarted","Data":"50a3207c43535211cc781efbf364abe05d4043fb9f6a837131123ef8444aee37"} Jan 23 14:07:45 crc kubenswrapper[4775]: I0123 14:07:45.722044 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a8470cc-442d-4efc-91a2-af7e4fe75b3a" path="/var/lib/kubelet/pods/4a8470cc-442d-4efc-91a2-af7e4fe75b3a/volumes" Jan 23 14:07:45 crc kubenswrapper[4775]: I0123 14:07:45.723859 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9a77e3c-0e93-45f9-ab81-7dfbd2916588" path="/var/lib/kubelet/pods/a9a77e3c-0e93-45f9-ab81-7dfbd2916588/volumes" Jan 23 14:07:46 crc kubenswrapper[4775]: I0123 14:07:46.976471 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7fc4d79794-zptsb"] Jan 23 14:07:46 crc kubenswrapper[4775]: I0123 14:07:46.980015 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:46 crc kubenswrapper[4775]: I0123 14:07:46.983544 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 14:07:46 crc kubenswrapper[4775]: I0123 14:07:46.983949 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 14:07:46 crc kubenswrapper[4775]: I0123 14:07:46.984621 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 14:07:46 crc kubenswrapper[4775]: I0123 14:07:46.984980 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 14:07:46 crc kubenswrapper[4775]: I0123 14:07:46.986154 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:46.989061 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7fc4d79794-zptsb"] Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.011007 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.013762 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.035226 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-config\") pod \"controller-manager-7fc4d79794-zptsb\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.035284 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-proxy-ca-bundles\") pod \"controller-manager-7fc4d79794-zptsb\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.035474 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-client-ca\") pod \"controller-manager-7fc4d79794-zptsb\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.035535 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf89t\" (UniqueName: \"kubernetes.io/projected/db514f53-7687-42b7-b6bb-edc7208361d6-kube-api-access-sf89t\") pod \"controller-manager-7fc4d79794-zptsb\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.035616 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/db514f53-7687-42b7-b6bb-edc7208361d6-serving-cert\") pod \"controller-manager-7fc4d79794-zptsb\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:47 crc kubenswrapper[4775]: E0123 14:07:47.129701 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 23 14:07:47 crc kubenswrapper[4775]: E0123 14:07:47.129882 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f2lfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-2q2jj_openshift-marketplace(8bb5169a-229e-4d38-beea-4783c11d0098): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 14:07:47 crc kubenswrapper[4775]: E0123 14:07:47.131102 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-2q2jj" podUID="8bb5169a-229e-4d38-beea-4783c11d0098" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.137418 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-client-ca\") pod \"controller-manager-7fc4d79794-zptsb\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.138740 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-client-ca\") pod 
\"controller-manager-7fc4d79794-zptsb\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.138863 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf89t\" (UniqueName: \"kubernetes.io/projected/db514f53-7687-42b7-b6bb-edc7208361d6-kube-api-access-sf89t\") pod \"controller-manager-7fc4d79794-zptsb\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.138960 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db514f53-7687-42b7-b6bb-edc7208361d6-serving-cert\") pod \"controller-manager-7fc4d79794-zptsb\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.139084 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-config\") pod \"controller-manager-7fc4d79794-zptsb\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.139113 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-proxy-ca-bundles\") pod \"controller-manager-7fc4d79794-zptsb\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.140346 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-config\") pod \"controller-manager-7fc4d79794-zptsb\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.140645 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-proxy-ca-bundles\") pod \"controller-manager-7fc4d79794-zptsb\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.147132 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db514f53-7687-42b7-b6bb-edc7208361d6-serving-cert\") pod \"controller-manager-7fc4d79794-zptsb\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.166704 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf89t\" (UniqueName: \"kubernetes.io/projected/db514f53-7687-42b7-b6bb-edc7208361d6-kube-api-access-sf89t\") pod \"controller-manager-7fc4d79794-zptsb\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 
14:07:47.327384 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:48 crc kubenswrapper[4775]: E0123 14:07:48.943884 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 23 14:07:48 crc kubenswrapper[4775]: E0123 14:07:48.944417 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h2k5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-84gx7_openshift-marketplace(0e3253a9-fac0-401c-8e02-52758dbc40f3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 14:07:48 crc kubenswrapper[4775]: E0123 14:07:48.945670 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-84gx7" podUID="0e3253a9-fac0-401c-8e02-52758dbc40f3" Jan 23 14:07:50 crc kubenswrapper[4775]: E0123 14:07:50.056221 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-84gx7" podUID="0e3253a9-fac0-401c-8e02-52758dbc40f3" Jan 23 14:07:50 crc kubenswrapper[4775]: E0123 14:07:50.057246 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2q2jj" 
podUID="8bb5169a-229e-4d38-beea-4783c11d0098" Jan 23 14:07:50 crc kubenswrapper[4775]: E0123 14:07:50.128890 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 23 14:07:50 crc kubenswrapper[4775]: E0123 14:07:50.129526 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vnxtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-285dn_openshift-marketplace(1b219edd-2ebd-4968-b427-ec555eade68c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 14:07:50 crc kubenswrapper[4775]: E0123 14:07:50.130726 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-285dn" podUID="1b219edd-2ebd-4968-b427-ec555eade68c" Jan 23 14:07:50 crc kubenswrapper[4775]: I0123 14:07:50.304189 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7fc4d79794-zptsb"] Jan 23 14:07:50 crc kubenswrapper[4775]: W0123 14:07:50.327577 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb514f53_7687_42b7_b6bb_edc7208361d6.slice/crio-d9fd91d6e90c91180c8d490f7128ec362afa9bb227a9ab898100a9fcd0fc4b47 WatchSource:0}: Error finding container d9fd91d6e90c91180c8d490f7128ec362afa9bb227a9ab898100a9fcd0fc4b47: Status 404 returned error can't find the container with id d9fd91d6e90c91180c8d490f7128ec362afa9bb227a9ab898100a9fcd0fc4b47 Jan 23 14:07:50 crc kubenswrapper[4775]: I0123 14:07:50.395937 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0","Type":"ContainerStarted","Data":"f55aef3075bf3519bde57f36e8c03c9ec9ac3f4b76b1c0fb9bf763560e6b84f4"} Jan 23 14:07:50 crc kubenswrapper[4775]: I0123 14:07:50.397127 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" event={"ID":"db514f53-7687-42b7-b6bb-edc7208361d6","Type":"ContainerStarted","Data":"d9fd91d6e90c91180c8d490f7128ec362afa9bb227a9ab898100a9fcd0fc4b47"} Jan 23 14:07:50 crc kubenswrapper[4775]: I0123 14:07:50.399427 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-47lz2" event={"ID":"63ed1a97-c97e-40d0-afdf-260c475dc83f","Type":"ContainerStarted","Data":"0436249a72238537f3e2c75557b89ddfb8ecc64c7946eccac2a4926110abd43e"} Jan 23 14:07:50 crc kubenswrapper[4775]: E0123 14:07:50.400877 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-285dn" podUID="1b219edd-2ebd-4968-b427-ec555eade68c" Jan 23 14:07:50 crc kubenswrapper[4775]: I0123 14:07:50.616885 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-47lz2" podStartSLOduration=194.616863277 podStartE2EDuration="3m14.616863277s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:07:50.431970952 +0000 UTC m=+217.426799702" watchObservedRunningTime="2026-01-23 14:07:50.616863277 +0000 UTC m=+217.611692017" Jan 23 14:07:50 crc kubenswrapper[4775]: I0123 14:07:50.623299 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp"] Jan 23 14:07:50 crc kubenswrapper[4775]: W0123 14:07:50.627994 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod303477e6_d4ac_4cbc_a088_3d7754129bd4.slice/crio-46aebd3620f7b7059c0f06be42afa1d095d92cafdab916c70358b05e83c2baba WatchSource:0}: Error finding container 46aebd3620f7b7059c0f06be42afa1d095d92cafdab916c70358b05e83c2baba: Status 404 returned error can't find the container with id 46aebd3620f7b7059c0f06be42afa1d095d92cafdab916c70358b05e83c2baba Jan 23 14:07:51 crc kubenswrapper[4775]: I0123 14:07:51.408393 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" event={"ID":"db514f53-7687-42b7-b6bb-edc7208361d6","Type":"ContainerStarted","Data":"6749598a5345ffb0fda60f9291093153566d9479b12238d34684f41edb3fc062"} Jan 23 14:07:51 crc kubenswrapper[4775]: I0123 14:07:51.408763 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:51 crc kubenswrapper[4775]: I0123 14:07:51.410281 4775 generic.go:334] "Generic (PLEG): container finished" podID="df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0" containerID="8313d85e6f6770dd871c5a84a51890ea2ea183eff258a22019919e03772f0b12" exitCode=0 Jan 23 14:07:51 crc kubenswrapper[4775]: I0123 14:07:51.410348 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0","Type":"ContainerDied","Data":"8313d85e6f6770dd871c5a84a51890ea2ea183eff258a22019919e03772f0b12"} Jan 23 14:07:51 crc kubenswrapper[4775]: I0123 14:07:51.412006 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:51 crc kubenswrapper[4775]: I0123 14:07:51.417132 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b0d34b3f-ebda-4e48-82ec-36db9214c42a","Type":"ContainerStarted","Data":"0c28974bf5aa3d2045f7f01151a0a690db3102172d533985bc3f349a477cc135"} Jan 23 14:07:51 crc kubenswrapper[4775]: I0123 14:07:51.421387 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pphm8" event={"ID":"1a627ae2-fe8d-403e-9d14-3c3ace588da5","Type":"ContainerStarted","Data":"2f1f5c6dce1daa303e2331c24327c21bb8a394fe4879f5fa44bbe92a333ebdca"} Jan 23 14:07:51 crc kubenswrapper[4775]: I0123 14:07:51.428900 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" event={"ID":"303477e6-d4ac-4cbc-a088-3d7754129bd4","Type":"ContainerStarted","Data":"75677b9b3bc9dd548b6b712ffb579a2023be7d4e1472e7d29a9986a72dbb56cd"} Jan 23 14:07:51 crc kubenswrapper[4775]: I0123 14:07:51.428979 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" event={"ID":"303477e6-d4ac-4cbc-a088-3d7754129bd4","Type":"ContainerStarted","Data":"46aebd3620f7b7059c0f06be42afa1d095d92cafdab916c70358b05e83c2baba"} Jan 23 14:07:51 crc kubenswrapper[4775]: I0123 14:07:51.429007 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" Jan 23 14:07:51 crc kubenswrapper[4775]: I0123 14:07:51.437880 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" Jan 23 14:07:51 crc kubenswrapper[4775]: I0123 14:07:51.442653 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" podStartSLOduration=25.44263853 podStartE2EDuration="25.44263853s" podCreationTimestamp="2026-01-23 14:07:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:07:51.427172927 +0000 UTC m=+218.422001667" watchObservedRunningTime="2026-01-23 14:07:51.44263853 +0000 UTC m=+218.437467270" Jan 23 14:07:51 crc kubenswrapper[4775]: I0123 14:07:51.470330 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=17.470315294 podStartE2EDuration="17.470315294s" podCreationTimestamp="2026-01-23 14:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:07:51.469463017 +0000 UTC m=+218.464291797" watchObservedRunningTime="2026-01-23 14:07:51.470315294 +0000 UTC m=+218.465144034" Jan 23 14:07:51 crc kubenswrapper[4775]: I0123 14:07:51.502216 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" podStartSLOduration=25.50219874 
podStartE2EDuration="25.50219874s" podCreationTimestamp="2026-01-23 14:07:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:07:51.500184527 +0000 UTC m=+218.495013287" watchObservedRunningTime="2026-01-23 14:07:51.50219874 +0000 UTC m=+218.497027480" Jan 23 14:07:52 crc kubenswrapper[4775]: I0123 14:07:52.434497 4775 generic.go:334] "Generic (PLEG): container finished" podID="1a627ae2-fe8d-403e-9d14-3c3ace588da5" containerID="2f1f5c6dce1daa303e2331c24327c21bb8a394fe4879f5fa44bbe92a333ebdca" exitCode=0 Jan 23 14:07:52 crc kubenswrapper[4775]: I0123 14:07:52.434570 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pphm8" event={"ID":"1a627ae2-fe8d-403e-9d14-3c3ace588da5","Type":"ContainerDied","Data":"2f1f5c6dce1daa303e2331c24327c21bb8a394fe4879f5fa44bbe92a333ebdca"} Jan 23 14:07:52 crc kubenswrapper[4775]: I0123 14:07:52.435730 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pphm8" event={"ID":"1a627ae2-fe8d-403e-9d14-3c3ace588da5","Type":"ContainerStarted","Data":"52d85f8b19526e62a15c2bbebc40ff3a5e40cac38ce5567549cca65b58a04c73"} Jan 23 14:07:52 crc kubenswrapper[4775]: I0123 14:07:52.438310 4775 generic.go:334] "Generic (PLEG): container finished" podID="e59d5724-424f-4151-98a4-c2cfa3918ac0" containerID="cfd053c22baaf71bc6e6f5aaf2077bc268a3849c132a7cf71ad6b25d80b48bc6" exitCode=0 Jan 23 14:07:52 crc kubenswrapper[4775]: I0123 14:07:52.438371 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q6l68" event={"ID":"e59d5724-424f-4151-98a4-c2cfa3918ac0","Type":"ContainerDied","Data":"cfd053c22baaf71bc6e6f5aaf2077bc268a3849c132a7cf71ad6b25d80b48bc6"} Jan 23 14:07:52 crc kubenswrapper[4775]: I0123 14:07:52.459307 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pphm8" podStartSLOduration=2.054023057 podStartE2EDuration="1m4.459288375s" podCreationTimestamp="2026-01-23 14:06:48 +0000 UTC" firstStartedPulling="2026-01-23 14:06:49.620044681 +0000 UTC m=+156.614873422" lastFinishedPulling="2026-01-23 14:07:52.02531 +0000 UTC m=+219.020138740" observedRunningTime="2026-01-23 14:07:52.456286001 +0000 UTC m=+219.451114741" watchObservedRunningTime="2026-01-23 14:07:52.459288375 +0000 UTC m=+219.454117115" Jan 23 14:07:52 crc kubenswrapper[4775]: I0123 14:07:52.717617 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 14:07:52 crc kubenswrapper[4775]: I0123 14:07:52.826258 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0-kube-api-access\") pod \"df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0\" (UID: \"df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0\") " Jan 23 14:07:52 crc kubenswrapper[4775]: I0123 14:07:52.826303 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0-kubelet-dir\") pod \"df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0\" (UID: \"df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0\") " Jan 23 14:07:52 crc kubenswrapper[4775]: I0123 14:07:52.826484 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0" (UID: "df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:07:52 crc kubenswrapper[4775]: I0123 14:07:52.827008 4775 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 14:07:52 crc kubenswrapper[4775]: I0123 14:07:52.833343 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0" (UID: "df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:07:52 crc kubenswrapper[4775]: I0123 14:07:52.928485 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 14:07:53 crc kubenswrapper[4775]: I0123 14:07:53.219164 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:07:53 crc kubenswrapper[4775]: I0123 14:07:53.219229 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:07:53 crc kubenswrapper[4775]: I0123 14:07:53.219273 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:07:53 crc kubenswrapper[4775]: I0123 14:07:53.219851 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d"} pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 14:07:53 crc kubenswrapper[4775]: I0123 14:07:53.219954 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" containerID="cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d" gracePeriod=600 Jan 23 14:07:53 crc kubenswrapper[4775]: I0123 14:07:53.445633 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 14:07:53 crc kubenswrapper[4775]: I0123 14:07:53.445657 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0","Type":"ContainerDied","Data":"f55aef3075bf3519bde57f36e8c03c9ec9ac3f4b76b1c0fb9bf763560e6b84f4"} Jan 23 14:07:53 crc kubenswrapper[4775]: I0123 14:07:53.446136 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f55aef3075bf3519bde57f36e8c03c9ec9ac3f4b76b1c0fb9bf763560e6b84f4" Jan 23 14:07:53 crc kubenswrapper[4775]: I0123 14:07:53.447908 4775 generic.go:334] "Generic (PLEG): container finished" podID="4fea0767-0566-4214-855d-ed0373946271" containerID="69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d" exitCode=0 Jan 23 14:07:53 crc kubenswrapper[4775]: I0123 14:07:53.447991 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerDied","Data":"69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d"} Jan 23 14:07:53 crc kubenswrapper[4775]: I0123 14:07:53.450722 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q6l68" event={"ID":"e59d5724-424f-4151-98a4-c2cfa3918ac0","Type":"ContainerStarted","Data":"706b207c906b11477ffafcc96a740d5e3fd0c32011317bda62a73b4005aa1b8f"} Jan 23 14:07:53 crc kubenswrapper[4775]: I0123 14:07:53.453366 4775 generic.go:334] "Generic (PLEG): container finished" podID="a25e2625-85e2-4f61-a654-347c5d111fc2" containerID="6e7e07e4a43f64752c0a8abac539b9d82b36fb5b5bf92042844ccd65a180b0bd" exitCode=0 Jan 23 14:07:53 crc kubenswrapper[4775]: I0123 14:07:53.455148 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-998gd" event={"ID":"a25e2625-85e2-4f61-a654-347c5d111fc2","Type":"ContainerDied","Data":"6e7e07e4a43f64752c0a8abac539b9d82b36fb5b5bf92042844ccd65a180b0bd"} Jan 23 14:07:53 crc kubenswrapper[4775]: I0123 14:07:53.480909 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q6l68" podStartSLOduration=2.370418032 podStartE2EDuration="1m3.480882785s" podCreationTimestamp="2026-01-23 14:06:50 +0000 UTC" firstStartedPulling="2026-01-23 14:06:51.805542788 +0000 UTC m=+158.800371528" lastFinishedPulling="2026-01-23 14:07:52.916007541 +0000 UTC m=+219.910836281" observedRunningTime="2026-01-23 14:07:53.472144282 +0000 UTC m=+220.466973032" watchObservedRunningTime="2026-01-23 14:07:53.480882785 +0000 UTC m=+220.475711535" Jan 23 14:07:54 crc kubenswrapper[4775]: I0123 14:07:54.460199 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerStarted","Data":"64681a72387a3235a4c6d3370b32de4e57c80d8102b47cdde5e10511ccb7381b"} Jan 23 14:07:54 crc kubenswrapper[4775]: I0123 14:07:54.461817 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-998gd" event={"ID":"a25e2625-85e2-4f61-a654-347c5d111fc2","Type":"ContainerStarted","Data":"b7027212b3bccd48b41a4c7b4324ffd0070d6284de5b7bb9bd87ab4379a0817e"} Jan 23 14:07:55 crc kubenswrapper[4775]: I0123 14:07:55.501494 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-998gd" podStartSLOduration=4.395167005 podStartE2EDuration="1m5.501467678s" podCreationTimestamp="2026-01-23 14:06:50 +0000 UTC" firstStartedPulling="2026-01-23 14:06:52.833262909 +0000 UTC m=+159.828091649" lastFinishedPulling="2026-01-23 14:07:53.939563582 +0000 UTC m=+220.934392322" observedRunningTime="2026-01-23 14:07:55.495838542 +0000 UTC m=+222.490667292" watchObservedRunningTime="2026-01-23 14:07:55.501467678 +0000 UTC m=+222.496296458" Jan 23 14:07:58 crc kubenswrapper[4775]: I0123 14:07:58.636790 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:07:58 crc kubenswrapper[4775]: I0123 14:07:58.637578 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:08:00 crc kubenswrapper[4775]: I0123 14:08:00.242420 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:08:00 crc kubenswrapper[4775]: I0123 14:08:00.308025 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:08:00 crc kubenswrapper[4775]: I0123 14:08:00.450538 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q6l68" Jan 23 14:08:00 crc kubenswrapper[4775]: I0123 14:08:00.450599 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q6l68" Jan 23 14:08:00 crc kubenswrapper[4775]: I0123 14:08:00.487429 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q6l68" Jan 23 14:08:00 crc kubenswrapper[4775]: I0123 14:08:00.519936 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stflq" event={"ID":"9f29362d-380a-46e7-b163-0ff42600d563","Type":"ContainerStarted","Data":"183673291a9648779d425ebe1de476acbe41025abfa9eb2361ef3769370abcf7"} Jan 23 14:08:00 crc kubenswrapper[4775]: I0123 14:08:00.522516 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdhzj" event={"ID":"945aeb53-25e2-4666-8fbe-a12be2948454","Type":"ContainerStarted","Data":"2e0a1b0a4d9848670d528c2dac734ab723eb0475190c6d5a98e31225e9651f6d"} Jan 23 14:08:00 crc kubenswrapper[4775]: I0123 14:08:00.565501 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q6l68" Jan 23 14:08:00 crc kubenswrapper[4775]: I0123 14:08:00.617416 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pphm8"] Jan 23 14:08:00 crc kubenswrapper[4775]: I0123 14:08:00.846544 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:08:00 crc kubenswrapper[4775]: I0123 14:08:00.846581 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:08:00 crc kubenswrapper[4775]: I0123 14:08:00.900905 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:08:01 crc kubenswrapper[4775]: I0123 14:08:01.529259 4775 generic.go:334] "Generic (PLEG): container finished" 
podID="9f29362d-380a-46e7-b163-0ff42600d563" containerID="183673291a9648779d425ebe1de476acbe41025abfa9eb2361ef3769370abcf7" exitCode=0 Jan 23 14:08:01 crc kubenswrapper[4775]: I0123 14:08:01.529456 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stflq" event={"ID":"9f29362d-380a-46e7-b163-0ff42600d563","Type":"ContainerDied","Data":"183673291a9648779d425ebe1de476acbe41025abfa9eb2361ef3769370abcf7"} Jan 23 14:08:01 crc kubenswrapper[4775]: I0123 14:08:01.533085 4775 generic.go:334] "Generic (PLEG): container finished" podID="945aeb53-25e2-4666-8fbe-a12be2948454" containerID="2e0a1b0a4d9848670d528c2dac734ab723eb0475190c6d5a98e31225e9651f6d" exitCode=0 Jan 23 14:08:01 crc kubenswrapper[4775]: I0123 14:08:01.534246 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdhzj" event={"ID":"945aeb53-25e2-4666-8fbe-a12be2948454","Type":"ContainerDied","Data":"2e0a1b0a4d9848670d528c2dac734ab723eb0475190c6d5a98e31225e9651f6d"} Jan 23 14:08:01 crc kubenswrapper[4775]: I0123 14:08:01.535377 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pphm8" podUID="1a627ae2-fe8d-403e-9d14-3c3ace588da5" containerName="registry-server" containerID="cri-o://52d85f8b19526e62a15c2bbebc40ff3a5e40cac38ce5567549cca65b58a04c73" gracePeriod=2 Jan 23 14:08:01 crc kubenswrapper[4775]: I0123 14:08:01.962355 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:08:02 crc kubenswrapper[4775]: I0123 14:08:02.546205 4775 generic.go:334] "Generic (PLEG): container finished" podID="1a627ae2-fe8d-403e-9d14-3c3ace588da5" containerID="52d85f8b19526e62a15c2bbebc40ff3a5e40cac38ce5567549cca65b58a04c73" exitCode=0 Jan 23 14:08:02 crc kubenswrapper[4775]: I0123 14:08:02.546351 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pphm8" event={"ID":"1a627ae2-fe8d-403e-9d14-3c3ace588da5","Type":"ContainerDied","Data":"52d85f8b19526e62a15c2bbebc40ff3a5e40cac38ce5567549cca65b58a04c73"} Jan 23 14:08:02 crc kubenswrapper[4775]: I0123 14:08:02.820063 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-998gd"] Jan 23 14:08:03 crc kubenswrapper[4775]: I0123 14:08:03.554636 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-998gd" podUID="a25e2625-85e2-4f61-a654-347c5d111fc2" containerName="registry-server" containerID="cri-o://b7027212b3bccd48b41a4c7b4324ffd0070d6284de5b7bb9bd87ab4379a0817e" gracePeriod=2 Jan 23 14:08:03 crc kubenswrapper[4775]: I0123 14:08:03.859029 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:08:03 crc kubenswrapper[4775]: I0123 14:08:03.985786 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a627ae2-fe8d-403e-9d14-3c3ace588da5-utilities\") pod \"1a627ae2-fe8d-403e-9d14-3c3ace588da5\" (UID: \"1a627ae2-fe8d-403e-9d14-3c3ace588da5\") " Jan 23 14:08:03 crc kubenswrapper[4775]: I0123 14:08:03.986037 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a627ae2-fe8d-403e-9d14-3c3ace588da5-catalog-content\") pod \"1a627ae2-fe8d-403e-9d14-3c3ace588da5\" (UID: \"1a627ae2-fe8d-403e-9d14-3c3ace588da5\") " Jan 23 14:08:03 crc kubenswrapper[4775]: I0123 14:08:03.986173 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhknv\" (UniqueName: \"kubernetes.io/projected/1a627ae2-fe8d-403e-9d14-3c3ace588da5-kube-api-access-dhknv\") pod \"1a627ae2-fe8d-403e-9d14-3c3ace588da5\" (UID: \"1a627ae2-fe8d-403e-9d14-3c3ace588da5\") " Jan 23 14:08:03 crc kubenswrapper[4775]: I0123 14:08:03.986664 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a627ae2-fe8d-403e-9d14-3c3ace588da5-utilities" (OuterVolumeSpecName: "utilities") pod "1a627ae2-fe8d-403e-9d14-3c3ace588da5" (UID: "1a627ae2-fe8d-403e-9d14-3c3ace588da5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:08:03 crc kubenswrapper[4775]: I0123 14:08:03.986814 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a627ae2-fe8d-403e-9d14-3c3ace588da5-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:03 crc kubenswrapper[4775]: I0123 14:08:03.995074 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a627ae2-fe8d-403e-9d14-3c3ace588da5-kube-api-access-dhknv" (OuterVolumeSpecName: "kube-api-access-dhknv") pod "1a627ae2-fe8d-403e-9d14-3c3ace588da5" (UID: "1a627ae2-fe8d-403e-9d14-3c3ace588da5"). InnerVolumeSpecName "kube-api-access-dhknv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:08:04 crc kubenswrapper[4775]: I0123 14:08:04.087861 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhknv\" (UniqueName: \"kubernetes.io/projected/1a627ae2-fe8d-403e-9d14-3c3ace588da5-kube-api-access-dhknv\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:04 crc kubenswrapper[4775]: I0123 14:08:04.211044 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a627ae2-fe8d-403e-9d14-3c3ace588da5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a627ae2-fe8d-403e-9d14-3c3ace588da5" (UID: "1a627ae2-fe8d-403e-9d14-3c3ace588da5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:08:04 crc kubenswrapper[4775]: I0123 14:08:04.290317 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a627ae2-fe8d-403e-9d14-3c3ace588da5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:04 crc kubenswrapper[4775]: I0123 14:08:04.561496 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pphm8" event={"ID":"1a627ae2-fe8d-403e-9d14-3c3ace588da5","Type":"ContainerDied","Data":"0b453500d83d6bbbd03aaa519b618891a6bceb9a87ed025821643578d93cd618"} Jan 23 14:08:04 crc kubenswrapper[4775]: I0123 14:08:04.561515 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:08:04 crc kubenswrapper[4775]: I0123 14:08:04.561837 4775 scope.go:117] "RemoveContainer" containerID="52d85f8b19526e62a15c2bbebc40ff3a5e40cac38ce5567549cca65b58a04c73" Jan 23 14:08:04 crc kubenswrapper[4775]: I0123 14:08:04.570138 4775 generic.go:334] "Generic (PLEG): container finished" podID="a25e2625-85e2-4f61-a654-347c5d111fc2" containerID="b7027212b3bccd48b41a4c7b4324ffd0070d6284de5b7bb9bd87ab4379a0817e" exitCode=0 Jan 23 14:08:04 crc kubenswrapper[4775]: I0123 14:08:04.570232 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-998gd" event={"ID":"a25e2625-85e2-4f61-a654-347c5d111fc2","Type":"ContainerDied","Data":"b7027212b3bccd48b41a4c7b4324ffd0070d6284de5b7bb9bd87ab4379a0817e"} Jan 23 14:08:04 crc kubenswrapper[4775]: I0123 14:08:04.579327 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdhzj" event={"ID":"945aeb53-25e2-4666-8fbe-a12be2948454","Type":"ContainerStarted","Data":"cc1b22943c56dbb624adaa13d3deaf2266f850e92f931c164c7c7ecc34724e35"} Jan 23 14:08:04 crc kubenswrapper[4775]: I0123 14:08:04.582641 4775 scope.go:117] "RemoveContainer" containerID="2f1f5c6dce1daa303e2331c24327c21bb8a394fe4879f5fa44bbe92a333ebdca" Jan 23 14:08:04 crc kubenswrapper[4775]: I0123 14:08:04.590821 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pphm8"] Jan 23 14:08:04 crc kubenswrapper[4775]: I0123 14:08:04.592642 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pphm8"] Jan 23 14:08:04 crc kubenswrapper[4775]: I0123 14:08:04.613554 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hdhzj" podStartSLOduration=3.314178208 podStartE2EDuration="1m16.613535306s" podCreationTimestamp="2026-01-23 14:06:48 +0000 UTC" firstStartedPulling="2026-01-23 14:06:50.676337505 +0000 UTC m=+157.671166255" lastFinishedPulling="2026-01-23 14:08:03.975694613 +0000 UTC m=+230.970523353" observedRunningTime="2026-01-23 14:08:04.612348709 +0000 UTC m=+231.607177479" watchObservedRunningTime="2026-01-23 14:08:04.613535306 +0000 UTC m=+231.608364046" Jan 23 14:08:04 crc kubenswrapper[4775]: I0123 14:08:04.622180 4775 scope.go:117] "RemoveContainer" containerID="cbed6950aa3965cd8bfc7aa378027bf0a2d1e04ccbea9bb4f1e5636ae166f729" Jan 23 14:08:04 crc kubenswrapper[4775]: I0123 14:08:04.803235 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.000005 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25e2625-85e2-4f61-a654-347c5d111fc2-utilities\") pod \"a25e2625-85e2-4f61-a654-347c5d111fc2\" (UID: \"a25e2625-85e2-4f61-a654-347c5d111fc2\") " Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.000570 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mds26\" (UniqueName: \"kubernetes.io/projected/a25e2625-85e2-4f61-a654-347c5d111fc2-kube-api-access-mds26\") pod \"a25e2625-85e2-4f61-a654-347c5d111fc2\" (UID: \"a25e2625-85e2-4f61-a654-347c5d111fc2\") " Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.000706 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25e2625-85e2-4f61-a654-347c5d111fc2-catalog-content\") pod \"a25e2625-85e2-4f61-a654-347c5d111fc2\" (UID: \"a25e2625-85e2-4f61-a654-347c5d111fc2\") " Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.003186 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a25e2625-85e2-4f61-a654-347c5d111fc2-utilities" (OuterVolumeSpecName: "utilities") pod "a25e2625-85e2-4f61-a654-347c5d111fc2" (UID: "a25e2625-85e2-4f61-a654-347c5d111fc2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.016455 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a25e2625-85e2-4f61-a654-347c5d111fc2-kube-api-access-mds26" (OuterVolumeSpecName: "kube-api-access-mds26") pod "a25e2625-85e2-4f61-a654-347c5d111fc2" (UID: "a25e2625-85e2-4f61-a654-347c5d111fc2"). InnerVolumeSpecName "kube-api-access-mds26". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.031410 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a25e2625-85e2-4f61-a654-347c5d111fc2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a25e2625-85e2-4f61-a654-347c5d111fc2" (UID: "a25e2625-85e2-4f61-a654-347c5d111fc2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.101699 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25e2625-85e2-4f61-a654-347c5d111fc2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.101745 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25e2625-85e2-4f61-a654-347c5d111fc2-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.101764 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mds26\" (UniqueName: \"kubernetes.io/projected/a25e2625-85e2-4f61-a654-347c5d111fc2-kube-api-access-mds26\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.586817 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-998gd" event={"ID":"a25e2625-85e2-4f61-a654-347c5d111fc2","Type":"ContainerDied","Data":"3ac2cbde2ce107b51f2fd46e9adae179e9362f5a9c3e49977d3cabfab8d5c7a8"} Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.587090 4775 scope.go:117] "RemoveContainer" containerID="b7027212b3bccd48b41a4c7b4324ffd0070d6284de5b7bb9bd87ab4379a0817e" Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.587208 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.595326 4775 generic.go:334] "Generic (PLEG): container finished" podID="8bb5169a-229e-4d38-beea-4783c11d0098" containerID="e563f1706af6b75f9ac6731329cafb2b41d302473241046df0512766a2019809" exitCode=0 Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.595397 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q2jj" event={"ID":"8bb5169a-229e-4d38-beea-4783c11d0098","Type":"ContainerDied","Data":"e563f1706af6b75f9ac6731329cafb2b41d302473241046df0512766a2019809"} Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.600977 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stflq" event={"ID":"9f29362d-380a-46e7-b163-0ff42600d563","Type":"ContainerStarted","Data":"cf3fc96af9965d666fc5525bdd18e99c724ac634a1b40cc9d717fc2172e97742"} Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.614444 4775 scope.go:117] "RemoveContainer" containerID="6e7e07e4a43f64752c0a8abac539b9d82b36fb5b5bf92042844ccd65a180b0bd" Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.632564 4775 scope.go:117] "RemoveContainer" containerID="ca983591e9c5773d2d910396e97f6529e836009e39c2ca638887beada7a160d7" Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.642194 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-998gd"] Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.646691 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-998gd"] Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.726542 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a627ae2-fe8d-403e-9d14-3c3ace588da5" path="/var/lib/kubelet/pods/1a627ae2-fe8d-403e-9d14-3c3ace588da5/volumes" Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.727331 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a25e2625-85e2-4f61-a654-347c5d111fc2" path="/var/lib/kubelet/pods/a25e2625-85e2-4f61-a654-347c5d111fc2/volumes" Jan 23 14:08:06 crc kubenswrapper[4775]: I0123 14:08:06.608368 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q2jj" event={"ID":"8bb5169a-229e-4d38-beea-4783c11d0098","Type":"ContainerStarted","Data":"c7260cd3d625fa792d5d94bcaae087826a69b9166dd1b6258fd35d2e1bd77b66"} Jan 23 14:08:06 crc kubenswrapper[4775]: I0123 14:08:06.612105 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84gx7" event={"ID":"0e3253a9-fac0-401c-8e02-52758dbc40f3","Type":"ContainerStarted","Data":"5650f2902470285f87f0519671b820000e9540073b92320e14586d65634addb8"} Jan 23 14:08:06 crc kubenswrapper[4775]: I0123 14:08:06.614350 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-285dn" event={"ID":"1b219edd-2ebd-4968-b427-ec555eade68c","Type":"ContainerStarted","Data":"b1229993babbc54c28d7f94650301e60c409ed8c65f3e43af5dfec3a30554ce5"} Jan 23 14:08:06 crc kubenswrapper[4775]: I0123 14:08:06.628580 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2q2jj" podStartSLOduration=2.842502438 podStartE2EDuration="1m19.628554986s" podCreationTimestamp="2026-01-23 14:06:47 +0000 UTC" firstStartedPulling="2026-01-23 14:06:49.636568005 +0000 UTC m=+156.631396745" lastFinishedPulling="2026-01-23 14:08:06.422620563 +0000 UTC m=+233.417449293" observedRunningTime="2026-01-23 14:08:06.626893684 +0000 UTC m=+233.621722424" watchObservedRunningTime="2026-01-23 14:08:06.628554986 +0000 UTC m=+233.623383726" Jan 23 14:08:06 crc kubenswrapper[4775]: I0123 14:08:06.629930 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-stflq" podStartSLOduration=5.057721722 podStartE2EDuration="1m15.629922938s" podCreationTimestamp="2026-01-23 14:06:51 +0000 UTC" firstStartedPulling="2026-01-23 14:06:53.855046152 +0000 UTC m=+160.849874892" lastFinishedPulling="2026-01-23 14:08:04.427247368 +0000 UTC m=+231.422076108" observedRunningTime="2026-01-23 14:08:05.65705204 +0000 UTC m=+232.651880790" watchObservedRunningTime="2026-01-23 14:08:06.629922938 +0000 UTC m=+233.624751678" Jan 23 14:08:07 crc kubenswrapper[4775]: I0123 14:08:07.632209 4775 generic.go:334] "Generic (PLEG): container finished" podID="0e3253a9-fac0-401c-8e02-52758dbc40f3" containerID="5650f2902470285f87f0519671b820000e9540073b92320e14586d65634addb8" exitCode=0 Jan 23 14:08:07 crc kubenswrapper[4775]: I0123 14:08:07.632247 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84gx7" event={"ID":"0e3253a9-fac0-401c-8e02-52758dbc40f3","Type":"ContainerDied","Data":"5650f2902470285f87f0519671b820000e9540073b92320e14586d65634addb8"} Jan 23 14:08:07 crc kubenswrapper[4775]: I0123 14:08:07.635709 4775 generic.go:334] "Generic (PLEG): container finished" podID="1b219edd-2ebd-4968-b427-ec555eade68c" containerID="b1229993babbc54c28d7f94650301e60c409ed8c65f3e43af5dfec3a30554ce5" exitCode=0 Jan 23 14:08:07 crc kubenswrapper[4775]: I0123 14:08:07.635744 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-285dn" event={"ID":"1b219edd-2ebd-4968-b427-ec555eade68c","Type":"ContainerDied","Data":"b1229993babbc54c28d7f94650301e60c409ed8c65f3e43af5dfec3a30554ce5"} Jan 23 14:08:08 crc kubenswrapper[4775]: I0123 
14:08:08.269422 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2q2jj" Jan 23 14:08:08 crc kubenswrapper[4775]: I0123 14:08:08.269820 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2q2jj" Jan 23 14:08:08 crc kubenswrapper[4775]: I0123 14:08:08.643609 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84gx7" event={"ID":"0e3253a9-fac0-401c-8e02-52758dbc40f3","Type":"ContainerStarted","Data":"d42ef899e57f6183a5f1a3a8ba0663646429d61c6d74c35df738852826152a1c"} Jan 23 14:08:08 crc kubenswrapper[4775]: I0123 14:08:08.647378 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-285dn" event={"ID":"1b219edd-2ebd-4968-b427-ec555eade68c","Type":"ContainerStarted","Data":"5d5b3239c4354bbf8668793adb57fca35d10a6d969fbc9bd29c2463925617ab2"} Jan 23 14:08:08 crc kubenswrapper[4775]: I0123 14:08:08.677331 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-84gx7" podStartSLOduration=2.37447291 podStartE2EDuration="1m17.67731126s" podCreationTimestamp="2026-01-23 14:06:51 +0000 UTC" firstStartedPulling="2026-01-23 14:06:52.836708282 +0000 UTC m=+159.831537022" lastFinishedPulling="2026-01-23 14:08:08.139546612 +0000 UTC m=+235.134375372" observedRunningTime="2026-01-23 14:08:08.67699112 +0000 UTC m=+235.671819880" watchObservedRunningTime="2026-01-23 14:08:08.67731126 +0000 UTC m=+235.672140000" Jan 23 14:08:08 crc kubenswrapper[4775]: I0123 14:08:08.704268 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-285dn" podStartSLOduration=3.323133908 podStartE2EDuration="1m20.704248851s" podCreationTimestamp="2026-01-23 14:06:48 +0000 UTC" firstStartedPulling="2026-01-23 14:06:50.683962443 +0000 UTC m=+157.678791183" lastFinishedPulling="2026-01-23 14:08:08.065077386 +0000 UTC m=+235.059906126" observedRunningTime="2026-01-23 14:08:08.70261424 +0000 UTC m=+235.697442980" watchObservedRunningTime="2026-01-23 14:08:08.704248851 +0000 UTC m=+235.699077591" Jan 23 14:08:08 crc kubenswrapper[4775]: I0123 14:08:08.780830 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4q8mj"] Jan 23 14:08:09 crc kubenswrapper[4775]: I0123 14:08:09.310976 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-2q2jj" podUID="8bb5169a-229e-4d38-beea-4783c11d0098" containerName="registry-server" probeResult="failure" output=< Jan 23 14:08:09 crc kubenswrapper[4775]: timeout: failed to connect service ":50051" within 1s Jan 23 14:08:09 crc kubenswrapper[4775]: > Jan 23 14:08:09 crc kubenswrapper[4775]: I0123 14:08:09.483205 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-285dn" Jan 23 14:08:09 crc kubenswrapper[4775]: I0123 14:08:09.483335 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-285dn" Jan 23 14:08:09 crc kubenswrapper[4775]: I0123 14:08:09.548220 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hdhzj" Jan 23 14:08:09 crc kubenswrapper[4775]: I0123 14:08:09.548586 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-hdhzj" Jan 23 14:08:09 crc kubenswrapper[4775]: I0123 14:08:09.590398 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hdhzj" Jan 23 14:08:09 crc kubenswrapper[4775]: I0123 14:08:09.690018 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hdhzj" Jan 23 14:08:10 crc kubenswrapper[4775]: I0123 14:08:10.539997 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-285dn" podUID="1b219edd-2ebd-4968-b427-ec555eade68c" containerName="registry-server" probeResult="failure" output=< Jan 23 14:08:10 crc kubenswrapper[4775]: timeout: failed to connect service ":50051" within 1s Jan 23 14:08:10 crc kubenswrapper[4775]: > Jan 23 14:08:11 crc kubenswrapper[4775]: I0123 14:08:11.503848 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-84gx7" Jan 23 14:08:11 crc kubenswrapper[4775]: I0123 14:08:11.503893 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-84gx7" Jan 23 14:08:11 crc kubenswrapper[4775]: I0123 14:08:11.895432 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:08:11 crc kubenswrapper[4775]: I0123 14:08:11.895485 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:08:11 crc kubenswrapper[4775]: I0123 14:08:11.938186 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:08:12 crc kubenswrapper[4775]: I0123 14:08:12.547694 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-84gx7" podUID="0e3253a9-fac0-401c-8e02-52758dbc40f3" containerName="registry-server" probeResult="failure" output=< Jan 23 14:08:12 crc kubenswrapper[4775]: timeout: failed to connect service ":50051" within 1s Jan 23 14:08:12 crc kubenswrapper[4775]: > Jan 23 14:08:12 crc kubenswrapper[4775]: I0123 14:08:12.713910 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:08:13 crc kubenswrapper[4775]: I0123 14:08:13.215139 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hdhzj"] Jan 23 14:08:13 crc kubenswrapper[4775]: I0123 14:08:13.215682 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hdhzj" podUID="945aeb53-25e2-4666-8fbe-a12be2948454" containerName="registry-server" containerID="cri-o://cc1b22943c56dbb624adaa13d3deaf2266f850e92f931c164c7c7ecc34724e35" gracePeriod=2 Jan 23 14:08:15 crc kubenswrapper[4775]: I0123 14:08:15.012629 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-stflq"] Jan 23 14:08:15 crc kubenswrapper[4775]: I0123 14:08:15.012858 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-stflq" podUID="9f29362d-380a-46e7-b163-0ff42600d563" containerName="registry-server" containerID="cri-o://cf3fc96af9965d666fc5525bdd18e99c724ac634a1b40cc9d717fc2172e97742" gracePeriod=2 Jan 23 14:08:17 crc kubenswrapper[4775]: I0123 14:08:17.716673 4775 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-stflq_9f29362d-380a-46e7-b163-0ff42600d563/registry-server/0.log" Jan 23 14:08:17 crc kubenswrapper[4775]: I0123 14:08:17.717994 4775 generic.go:334] "Generic (PLEG): container finished" podID="9f29362d-380a-46e7-b163-0ff42600d563" containerID="cf3fc96af9965d666fc5525bdd18e99c724ac634a1b40cc9d717fc2172e97742" exitCode=137 Jan 23 14:08:17 crc kubenswrapper[4775]: I0123 14:08:17.720284 4775 generic.go:334] "Generic (PLEG): container finished" podID="945aeb53-25e2-4666-8fbe-a12be2948454" containerID="cc1b22943c56dbb624adaa13d3deaf2266f850e92f931c164c7c7ecc34724e35" exitCode=0 Jan 23 14:08:17 crc kubenswrapper[4775]: I0123 14:08:17.726829 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stflq" event={"ID":"9f29362d-380a-46e7-b163-0ff42600d563","Type":"ContainerDied","Data":"cf3fc96af9965d666fc5525bdd18e99c724ac634a1b40cc9d717fc2172e97742"} Jan 23 14:08:17 crc kubenswrapper[4775]: I0123 14:08:17.726870 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdhzj" event={"ID":"945aeb53-25e2-4666-8fbe-a12be2948454","Type":"ContainerDied","Data":"cc1b22943c56dbb624adaa13d3deaf2266f850e92f931c164c7c7ecc34724e35"} Jan 23 14:08:17 crc kubenswrapper[4775]: I0123 14:08:17.839868 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hdhzj" Jan 23 14:08:17 crc kubenswrapper[4775]: I0123 14:08:17.981411 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/945aeb53-25e2-4666-8fbe-a12be2948454-utilities\") pod \"945aeb53-25e2-4666-8fbe-a12be2948454\" (UID: \"945aeb53-25e2-4666-8fbe-a12be2948454\") " Jan 23 14:08:17 crc kubenswrapper[4775]: I0123 14:08:17.981476 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/945aeb53-25e2-4666-8fbe-a12be2948454-catalog-content\") pod \"945aeb53-25e2-4666-8fbe-a12be2948454\" (UID: \"945aeb53-25e2-4666-8fbe-a12be2948454\") " Jan 23 14:08:17 crc kubenswrapper[4775]: I0123 14:08:17.981538 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4wm7\" (UniqueName: \"kubernetes.io/projected/945aeb53-25e2-4666-8fbe-a12be2948454-kube-api-access-w4wm7\") pod \"945aeb53-25e2-4666-8fbe-a12be2948454\" (UID: \"945aeb53-25e2-4666-8fbe-a12be2948454\") " Jan 23 14:08:17 crc kubenswrapper[4775]: I0123 14:08:17.982747 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/945aeb53-25e2-4666-8fbe-a12be2948454-utilities" (OuterVolumeSpecName: "utilities") pod "945aeb53-25e2-4666-8fbe-a12be2948454" (UID: "945aeb53-25e2-4666-8fbe-a12be2948454"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:08:17 crc kubenswrapper[4775]: I0123 14:08:17.993162 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/945aeb53-25e2-4666-8fbe-a12be2948454-kube-api-access-w4wm7" (OuterVolumeSpecName: "kube-api-access-w4wm7") pod "945aeb53-25e2-4666-8fbe-a12be2948454" (UID: "945aeb53-25e2-4666-8fbe-a12be2948454"). InnerVolumeSpecName "kube-api-access-w4wm7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.044710 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/945aeb53-25e2-4666-8fbe-a12be2948454-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "945aeb53-25e2-4666-8fbe-a12be2948454" (UID: "945aeb53-25e2-4666-8fbe-a12be2948454"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.066508 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-stflq_9f29362d-380a-46e7-b163-0ff42600d563/registry-server/0.log" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.067164 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.084879 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj6gc\" (UniqueName: \"kubernetes.io/projected/9f29362d-380a-46e7-b163-0ff42600d563-kube-api-access-nj6gc\") pod \"9f29362d-380a-46e7-b163-0ff42600d563\" (UID: \"9f29362d-380a-46e7-b163-0ff42600d563\") " Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.084955 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f29362d-380a-46e7-b163-0ff42600d563-utilities\") pod \"9f29362d-380a-46e7-b163-0ff42600d563\" (UID: \"9f29362d-380a-46e7-b163-0ff42600d563\") " Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.084974 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f29362d-380a-46e7-b163-0ff42600d563-catalog-content\") pod \"9f29362d-380a-46e7-b163-0ff42600d563\" (UID: \"9f29362d-380a-46e7-b163-0ff42600d563\") " Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.085139 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/945aeb53-25e2-4666-8fbe-a12be2948454-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.085151 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/945aeb53-25e2-4666-8fbe-a12be2948454-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.085161 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4wm7\" (UniqueName: \"kubernetes.io/projected/945aeb53-25e2-4666-8fbe-a12be2948454-kube-api-access-w4wm7\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.086538 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f29362d-380a-46e7-b163-0ff42600d563-utilities" (OuterVolumeSpecName: "utilities") pod "9f29362d-380a-46e7-b163-0ff42600d563" (UID: "9f29362d-380a-46e7-b163-0ff42600d563"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.120640 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f29362d-380a-46e7-b163-0ff42600d563-kube-api-access-nj6gc" (OuterVolumeSpecName: "kube-api-access-nj6gc") pod "9f29362d-380a-46e7-b163-0ff42600d563" (UID: "9f29362d-380a-46e7-b163-0ff42600d563"). InnerVolumeSpecName "kube-api-access-nj6gc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.185932 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj6gc\" (UniqueName: \"kubernetes.io/projected/9f29362d-380a-46e7-b163-0ff42600d563-kube-api-access-nj6gc\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.186242 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f29362d-380a-46e7-b163-0ff42600d563-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.197289 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f29362d-380a-46e7-b163-0ff42600d563-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f29362d-380a-46e7-b163-0ff42600d563" (UID: "9f29362d-380a-46e7-b163-0ff42600d563"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.287350 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f29362d-380a-46e7-b163-0ff42600d563-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.336279 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2q2jj" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.390304 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2q2jj" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.728169 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdhzj" event={"ID":"945aeb53-25e2-4666-8fbe-a12be2948454","Type":"ContainerDied","Data":"0b8e8f2a3112c9f0a5edf42bad4d4c0988004cce6f56bf24b39ad208c83c6912"} Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.728703 4775 scope.go:117] "RemoveContainer" containerID="cc1b22943c56dbb624adaa13d3deaf2266f850e92f931c164c7c7ecc34724e35" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.728574 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hdhzj" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.730963 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-stflq_9f29362d-380a-46e7-b163-0ff42600d563/registry-server/0.log" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.732858 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.732922 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stflq" event={"ID":"9f29362d-380a-46e7-b163-0ff42600d563","Type":"ContainerDied","Data":"50edf2899c3c4bd4f94febab7dade88c7fd87dc6b2dfbbaffdba8627cd2c9677"} Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.744424 4775 scope.go:117] "RemoveContainer" containerID="2e0a1b0a4d9848670d528c2dac734ab723eb0475190c6d5a98e31225e9651f6d" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.760423 4775 scope.go:117] "RemoveContainer" containerID="6872f50c5369e996aaf9998a59794f18e488c47ef49db5d73fa140ee26fe751a" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.768340 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-stflq"] Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.771021 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-stflq"] Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.786186 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hdhzj"] Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.786296 4775 scope.go:117] "RemoveContainer" containerID="cf3fc96af9965d666fc5525bdd18e99c724ac634a1b40cc9d717fc2172e97742" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.791083 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hdhzj"] Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.806651 4775 scope.go:117] "RemoveContainer" containerID="183673291a9648779d425ebe1de476acbe41025abfa9eb2361ef3769370abcf7" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.827610 4775 scope.go:117] "RemoveContainer" containerID="8cf1d207d3c181ec1fe849262ab8dacc707e0308d2b5ce3e6df1a12ceacccc47" Jan 23 14:08:19 crc kubenswrapper[4775]: I0123 14:08:19.552655 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-285dn" Jan 23 14:08:19 crc kubenswrapper[4775]: I0123 14:08:19.593833 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-285dn" Jan 23 14:08:19 crc kubenswrapper[4775]: I0123 14:08:19.721230 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="945aeb53-25e2-4666-8fbe-a12be2948454" path="/var/lib/kubelet/pods/945aeb53-25e2-4666-8fbe-a12be2948454/volumes" Jan 23 14:08:19 crc kubenswrapper[4775]: I0123 14:08:19.721793 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f29362d-380a-46e7-b163-0ff42600d563" path="/var/lib/kubelet/pods/9f29362d-380a-46e7-b163-0ff42600d563/volumes" Jan 23 14:08:21 crc kubenswrapper[4775]: I0123 14:08:21.543597 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-84gx7" Jan 23 14:08:21 crc kubenswrapper[4775]: I0123 14:08:21.581763 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-84gx7" Jan 23 14:08:26 crc kubenswrapper[4775]: I0123 14:08:26.545075 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7fc4d79794-zptsb"] Jan 23 14:08:26 crc kubenswrapper[4775]: I0123 14:08:26.545614 4775 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" podUID="db514f53-7687-42b7-b6bb-edc7208361d6" containerName="controller-manager" containerID="cri-o://6749598a5345ffb0fda60f9291093153566d9479b12238d34684f41edb3fc062" gracePeriod=30 Jan 23 14:08:26 crc kubenswrapper[4775]: I0123 14:08:26.650381 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp"] Jan 23 14:08:26 crc kubenswrapper[4775]: I0123 14:08:26.650711 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" podUID="303477e6-d4ac-4cbc-a088-3d7754129bd4" containerName="route-controller-manager" containerID="cri-o://75677b9b3bc9dd548b6b712ffb579a2023be7d4e1472e7d29a9986a72dbb56cd" gracePeriod=30 Jan 23 14:08:26 crc kubenswrapper[4775]: I0123 14:08:26.777794 4775 generic.go:334] "Generic (PLEG): container finished" podID="303477e6-d4ac-4cbc-a088-3d7754129bd4" containerID="75677b9b3bc9dd548b6b712ffb579a2023be7d4e1472e7d29a9986a72dbb56cd" exitCode=0 Jan 23 14:08:26 crc kubenswrapper[4775]: I0123 14:08:26.778009 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" event={"ID":"303477e6-d4ac-4cbc-a088-3d7754129bd4","Type":"ContainerDied","Data":"75677b9b3bc9dd548b6b712ffb579a2023be7d4e1472e7d29a9986a72dbb56cd"} Jan 23 14:08:26 crc kubenswrapper[4775]: I0123 14:08:26.780166 4775 generic.go:334] "Generic (PLEG): container finished" podID="db514f53-7687-42b7-b6bb-edc7208361d6" containerID="6749598a5345ffb0fda60f9291093153566d9479b12238d34684f41edb3fc062" exitCode=0 Jan 23 14:08:26 crc kubenswrapper[4775]: I0123 14:08:26.780218 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" event={"ID":"db514f53-7687-42b7-b6bb-edc7208361d6","Type":"ContainerDied","Data":"6749598a5345ffb0fda60f9291093153566d9479b12238d34684f41edb3fc062"} Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.093889 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.124586 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/303477e6-d4ac-4cbc-a088-3d7754129bd4-serving-cert\") pod \"303477e6-d4ac-4cbc-a088-3d7754129bd4\" (UID: \"303477e6-d4ac-4cbc-a088-3d7754129bd4\") " Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.124732 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/303477e6-d4ac-4cbc-a088-3d7754129bd4-client-ca\") pod \"303477e6-d4ac-4cbc-a088-3d7754129bd4\" (UID: \"303477e6-d4ac-4cbc-a088-3d7754129bd4\") " Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.124920 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8kcw\" (UniqueName: \"kubernetes.io/projected/303477e6-d4ac-4cbc-a088-3d7754129bd4-kube-api-access-l8kcw\") pod \"303477e6-d4ac-4cbc-a088-3d7754129bd4\" (UID: \"303477e6-d4ac-4cbc-a088-3d7754129bd4\") " Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.124958 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/303477e6-d4ac-4cbc-a088-3d7754129bd4-config\") pod \"303477e6-d4ac-4cbc-a088-3d7754129bd4\" (UID: \"303477e6-d4ac-4cbc-a088-3d7754129bd4\") " Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.126137 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/303477e6-d4ac-4cbc-a088-3d7754129bd4-client-ca" (OuterVolumeSpecName: "client-ca") pod "303477e6-d4ac-4cbc-a088-3d7754129bd4" (UID: "303477e6-d4ac-4cbc-a088-3d7754129bd4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.126244 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/303477e6-d4ac-4cbc-a088-3d7754129bd4-config" (OuterVolumeSpecName: "config") pod "303477e6-d4ac-4cbc-a088-3d7754129bd4" (UID: "303477e6-d4ac-4cbc-a088-3d7754129bd4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.131080 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/303477e6-d4ac-4cbc-a088-3d7754129bd4-kube-api-access-l8kcw" (OuterVolumeSpecName: "kube-api-access-l8kcw") pod "303477e6-d4ac-4cbc-a088-3d7754129bd4" (UID: "303477e6-d4ac-4cbc-a088-3d7754129bd4"). InnerVolumeSpecName "kube-api-access-l8kcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.131236 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/303477e6-d4ac-4cbc-a088-3d7754129bd4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "303477e6-d4ac-4cbc-a088-3d7754129bd4" (UID: "303477e6-d4ac-4cbc-a088-3d7754129bd4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.226956 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/303477e6-d4ac-4cbc-a088-3d7754129bd4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.226989 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/303477e6-d4ac-4cbc-a088-3d7754129bd4-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.226998 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/303477e6-d4ac-4cbc-a088-3d7754129bd4-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.227007 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8kcw\" (UniqueName: \"kubernetes.io/projected/303477e6-d4ac-4cbc-a088-3d7754129bd4-kube-api-access-l8kcw\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.413339 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.431118 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-client-ca\") pod \"db514f53-7687-42b7-b6bb-edc7208361d6\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.431169 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db514f53-7687-42b7-b6bb-edc7208361d6-serving-cert\") pod \"db514f53-7687-42b7-b6bb-edc7208361d6\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.431271 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sf89t\" (UniqueName: \"kubernetes.io/projected/db514f53-7687-42b7-b6bb-edc7208361d6-kube-api-access-sf89t\") pod \"db514f53-7687-42b7-b6bb-edc7208361d6\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.431297 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-config\") pod \"db514f53-7687-42b7-b6bb-edc7208361d6\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.431356 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-proxy-ca-bundles\") pod \"db514f53-7687-42b7-b6bb-edc7208361d6\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.432251 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-config" (OuterVolumeSpecName: "config") pod "db514f53-7687-42b7-b6bb-edc7208361d6" (UID: "db514f53-7687-42b7-b6bb-edc7208361d6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.432315 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "db514f53-7687-42b7-b6bb-edc7208361d6" (UID: "db514f53-7687-42b7-b6bb-edc7208361d6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.433015 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-client-ca" (OuterVolumeSpecName: "client-ca") pod "db514f53-7687-42b7-b6bb-edc7208361d6" (UID: "db514f53-7687-42b7-b6bb-edc7208361d6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.434930 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db514f53-7687-42b7-b6bb-edc7208361d6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "db514f53-7687-42b7-b6bb-edc7208361d6" (UID: "db514f53-7687-42b7-b6bb-edc7208361d6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.439959 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db514f53-7687-42b7-b6bb-edc7208361d6-kube-api-access-sf89t" (OuterVolumeSpecName: "kube-api-access-sf89t") pod "db514f53-7687-42b7-b6bb-edc7208361d6" (UID: "db514f53-7687-42b7-b6bb-edc7208361d6"). InnerVolumeSpecName "kube-api-access-sf89t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.532641 4775 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.532673 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.532681 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db514f53-7687-42b7-b6bb-edc7208361d6-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.532692 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sf89t\" (UniqueName: \"kubernetes.io/projected/db514f53-7687-42b7-b6bb-edc7208361d6-kube-api-access-sf89t\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.532702 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.785724 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" event={"ID":"303477e6-d4ac-4cbc-a088-3d7754129bd4","Type":"ContainerDied","Data":"46aebd3620f7b7059c0f06be42afa1d095d92cafdab916c70358b05e83c2baba"} Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 
14:08:27.785775 4775 scope.go:117] "RemoveContainer" containerID="75677b9b3bc9dd548b6b712ffb579a2023be7d4e1472e7d29a9986a72dbb56cd" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.785891 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.789770 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.789772 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" event={"ID":"db514f53-7687-42b7-b6bb-edc7208361d6","Type":"ContainerDied","Data":"d9fd91d6e90c91180c8d490f7128ec362afa9bb227a9ab898100a9fcd0fc4b47"} Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.809126 4775 scope.go:117] "RemoveContainer" containerID="6749598a5345ffb0fda60f9291093153566d9479b12238d34684f41edb3fc062" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.812918 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp"] Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.815396 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp"] Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.823565 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7fc4d79794-zptsb"] Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.827507 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7fc4d79794-zptsb"] Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.995870 4775 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.996517 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca" gracePeriod=15 Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.996546 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c" gracePeriod=15 Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.996591 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185" gracePeriod=15 Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.996579 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690" 
gracePeriod=15 Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.996612 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792" gracePeriod=15 Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998333 4775 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998582 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f29362d-380a-46e7-b163-0ff42600d563" containerName="extract-utilities" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998607 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f29362d-380a-46e7-b163-0ff42600d563" containerName="extract-utilities" Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998617 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998627 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998640 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="303477e6-d4ac-4cbc-a088-3d7754129bd4" containerName="route-controller-manager" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998650 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="303477e6-d4ac-4cbc-a088-3d7754129bd4" containerName="route-controller-manager" Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998660 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="945aeb53-25e2-4666-8fbe-a12be2948454" containerName="extract-content" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998667 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="945aeb53-25e2-4666-8fbe-a12be2948454" containerName="extract-content" Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998677 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998685 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998697 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a627ae2-fe8d-403e-9d14-3c3ace588da5" containerName="registry-server" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998704 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a627ae2-fe8d-403e-9d14-3c3ace588da5" containerName="registry-server" Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998711 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a627ae2-fe8d-403e-9d14-3c3ace588da5" containerName="extract-utilities" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998719 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a627ae2-fe8d-403e-9d14-3c3ace588da5" containerName="extract-utilities" Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998727 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25e2625-85e2-4f61-a654-347c5d111fc2" 
containerName="registry-server" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998734 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25e2625-85e2-4f61-a654-347c5d111fc2" containerName="registry-server" Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998744 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db514f53-7687-42b7-b6bb-edc7208361d6" containerName="controller-manager" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998751 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="db514f53-7687-42b7-b6bb-edc7208361d6" containerName="controller-manager" Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998761 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998770 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998783 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25e2625-85e2-4f61-a654-347c5d111fc2" containerName="extract-utilities" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998791 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25e2625-85e2-4f61-a654-347c5d111fc2" containerName="extract-utilities" Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998821 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f29362d-380a-46e7-b163-0ff42600d563" containerName="registry-server" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998830 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f29362d-380a-46e7-b163-0ff42600d563" containerName="registry-server" Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998840 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998847 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998859 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f29362d-380a-46e7-b163-0ff42600d563" containerName="extract-content" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998867 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f29362d-380a-46e7-b163-0ff42600d563" containerName="extract-content" Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998877 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a627ae2-fe8d-403e-9d14-3c3ace588da5" containerName="extract-content" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998885 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a627ae2-fe8d-403e-9d14-3c3ace588da5" containerName="extract-content" Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998899 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="945aeb53-25e2-4666-8fbe-a12be2948454" containerName="extract-utilities" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998909 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="945aeb53-25e2-4666-8fbe-a12be2948454" containerName="extract-utilities" Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998918 4775 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998925 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998935 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998943 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998951 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0" containerName="pruner" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998959 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0" containerName="pruner" Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998969 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="945aeb53-25e2-4666-8fbe-a12be2948454" containerName="registry-server" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998977 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="945aeb53-25e2-4666-8fbe-a12be2948454" containerName="registry-server" Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998992 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25e2625-85e2-4f61-a654-347c5d111fc2" containerName="extract-content" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.999000 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25e2625-85e2-4f61-a654-347c5d111fc2" containerName="extract-content" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.999102 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.999111 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="db514f53-7687-42b7-b6bb-edc7208361d6" containerName="controller-manager" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.999120 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.999129 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a627ae2-fe8d-403e-9d14-3c3ace588da5" containerName="registry-server" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.999140 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="a25e2625-85e2-4f61-a654-347c5d111fc2" containerName="registry-server" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.999150 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="303477e6-d4ac-4cbc-a088-3d7754129bd4" containerName="route-controller-manager" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.999162 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0" containerName="pruner" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.999175 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="945aeb53-25e2-4666-8fbe-a12be2948454" 
containerName="registry-server" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.999183 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.999193 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f29362d-380a-46e7-b163-0ff42600d563" containerName="registry-server" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.999207 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:27.999217 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 14:08:28 crc kubenswrapper[4775]: E0123 14:08:27.999319 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:27.999329 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:27.999437 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.000353 4775 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.000811 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.005989 4775 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.012948 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq"] Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.013917 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.016666 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.016946 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.016671 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.017675 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.018120 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.018364 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.025901 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f759bc488-r96ss"] Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.026776 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.036851 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f759bc488-r96ss"] Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.040570 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq"] Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.043252 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.044117 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.046099 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.046124 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config\") pod \"controller-manager-f759bc488-r96ss\" (UID: 
\"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.046157 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.046204 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr2rr\" (UniqueName: \"kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.046236 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.046263 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5caa98-bd54-485f-a11e-46a25c98f82f-config\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.046291 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wfb4\" (UniqueName: \"kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.046412 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.046441 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff5caa98-bd54-485f-a11e-46a25c98f82f-serving-cert\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.046464 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff5caa98-bd54-485f-a11e-46a25c98f82f-client-ca\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " 
pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.046523 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.046580 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.046610 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.046633 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.046676 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147342 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147389 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147416 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147447 4775 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147470 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147544 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147618 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147570 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147596 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147713 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147694 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147570 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147743 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147772 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147819 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr2rr\" (UniqueName: \"kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147844 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147866 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5caa98-bd54-485f-a11e-46a25c98f82f-config\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147873 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147883 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wfb4\" (UniqueName: \"kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147915 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147943 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147962 4775 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff5caa98-bd54-485f-a11e-46a25c98f82f-client-ca\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.148055 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff5caa98-bd54-485f-a11e-46a25c98f82f-serving-cert\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.148116 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.148195 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: E0123 14:08:28.148462 4775 projected.go:194] Error preparing data for projected volume kube-api-access-2wfb4 for pod openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq: failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:28 crc kubenswrapper[4775]: E0123 14:08:28.148534 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4 podName:ff5caa98-bd54-485f-a11e-46a25c98f82f nodeName:}" failed. No retries permitted until 2026-01-23 14:08:28.648516737 +0000 UTC m=+255.643345567 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2wfb4" (UniqueName: "kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4") pod "route-controller-manager-544cdfc94f-mdfkq" (UID: "ff5caa98-bd54-485f-a11e-46a25c98f82f") : failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:28 crc kubenswrapper[4775]: E0123 14:08:28.148828 4775 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{route-controller-manager-544cdfc94f-mdfkq.188d616364dfaafd openshift-route-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-route-controller-manager,Name:route-controller-manager-544cdfc94f-mdfkq,UID:ff5caa98-bd54-485f-a11e-46a25c98f82f,APIVersion:v1,ResourceVersion:29848,FieldPath:,},Reason:FailedMount,Message:MountVolume.SetUp failed for volume \"kube-api-access-2wfb4\" : failed to fetch token: Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token\": dial tcp 38.102.83.177:6443: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 14:08:28.148509437 +0000 UTC m=+255.143338177,LastTimestamp:2026-01-23 14:08:28.148509437 +0000 UTC m=+255.143338177,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.148933 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff5caa98-bd54-485f-a11e-46a25c98f82f-client-ca\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.149662 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5caa98-bd54-485f-a11e-46a25c98f82f-config\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.162642 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff5caa98-bd54-485f-a11e-46a25c98f82f-serving-cert\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.328584 4775 patch_prober.go:28] interesting pod/controller-manager-7fc4d79794-zptsb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 
14:08:28.328658 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" podUID="db514f53-7687-42b7-b6bb-edc7208361d6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.655464 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wfb4\" (UniqueName: \"kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:28 crc kubenswrapper[4775]: E0123 14:08:28.656291 4775 projected.go:194] Error preparing data for projected volume kube-api-access-2wfb4 for pod openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq: failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:28 crc kubenswrapper[4775]: E0123 14:08:28.656423 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4 podName:ff5caa98-bd54-485f-a11e-46a25c98f82f nodeName:}" failed. No retries permitted until 2026-01-23 14:08:29.656403601 +0000 UTC m=+256.651232341 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2wfb4" (UniqueName: "kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4") pod "route-controller-manager-544cdfc94f-mdfkq" (UID: "ff5caa98-bd54-485f-a11e-46a25c98f82f") : failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.797446 4775 generic.go:334] "Generic (PLEG): container finished" podID="b0d34b3f-ebda-4e48-82ec-36db9214c42a" containerID="0c28974bf5aa3d2045f7f01151a0a690db3102172d533985bc3f349a477cc135" exitCode=0 Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.797555 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b0d34b3f-ebda-4e48-82ec-36db9214c42a","Type":"ContainerDied","Data":"0c28974bf5aa3d2045f7f01151a0a690db3102172d533985bc3f349a477cc135"} Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.800386 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.801835 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.802450 4775 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c" exitCode=0 Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.802481 
4775 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792" exitCode=0
Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.802491 4775 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690" exitCode=0
Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.802501 4775 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185" exitCode=2
Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.802579 4775 scope.go:117] "RemoveContainer" containerID="f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2"
Jan 23 14:08:29 crc kubenswrapper[4775]: E0123 14:08:29.148553 4775 secret.go:188] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 23 14:08:29 crc kubenswrapper[4775]: E0123 14:08:29.148608 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition
Jan 23 14:08:29 crc kubenswrapper[4775]: E0123 14:08:29.148662 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:29.648636816 +0000 UTC m=+256.643465556 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync secret cache: timed out waiting for the condition
Jan 23 14:08:29 crc kubenswrapper[4775]: E0123 14:08:29.148706 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:29.648682137 +0000 UTC m=+256.643510898 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition
Jan 23 14:08:29 crc kubenswrapper[4775]: E0123 14:08:29.148727 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
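
The "failed to sync secret cache" / "failed to sync configmap cache: timed out waiting for the condition" errors above are the kubelet's secret and configmap managers giving up on a bounded wait for their API-backed caches; with the API server refusing connections (the dial tcp ... connection refused entries), the caches never populate, so every MountVolume.SetUp attempt fails and is requeued with a durationBeforeRetry backoff. A rough Go sketch of that bounded-wait shape, using a hypothetical hasSynced callback rather than the kubelet's real watch-driven cache plumbing:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForCacheSync polls a hypothetical hasSynced callback until it reports
// true or the timeout elapses; the kubelet's real secret/configmap managers
// perform a watch-driven equivalent of this bounded wait.
func waitForCacheSync(hasSynced func() bool, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if hasSynced() {
			return nil
		}
		time.Sleep(100 * time.Millisecond) // illustrative poll interval
	}
	return errors.New("timed out waiting for the condition")
}

func main() {
	// With the API server refusing connections the cache never syncs, so
	// every MountVolume.SetUp attempt surfaces this error and is requeued.
	err := waitForCacheSync(func() bool { return false }, 500*time.Millisecond)
	fmt.Println("failed to sync configmap cache:", err)
}
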
Jan 23 14:08:29 crc kubenswrapper[4775]: E0123 14:08:29.148861 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:29.648844703 +0000 UTC m=+256.643673523 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition
Jan 23 14:08:29 crc kubenswrapper[4775]: E0123 14:08:29.148739 4775 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 23 14:08:29 crc kubenswrapper[4775]: E0123 14:08:29.148776 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Jan 23 14:08:29 crc kubenswrapper[4775]: E0123 14:08:29.148953 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:29.648941336 +0000 UTC m=+256.643770196 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition
Jan 23 14:08:29 crc kubenswrapper[4775]: I0123 14:08:29.670960 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss"
Jan 23 14:08:29 crc kubenswrapper[4775]: I0123 14:08:29.671022 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss"
Jan 23 14:08:29 crc kubenswrapper[4775]: I0123 14:08:29.671058 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss"
Jan 23 14:08:29 crc kubenswrapper[4775]: I0123 14:08:29.671112 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wfb4\" (UniqueName: \"kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq"
Jan 23 14:08:29 crc kubenswrapper[4775]: I0123 14:08:29.671143 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss"
Jan 23 14:08:29 crc kubenswrapper[4775]: E0123 
14:08:29.671784 4775 projected.go:194] Error preparing data for projected volume kube-api-access-2wfb4 for pod openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq: failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:29 crc kubenswrapper[4775]: E0123 14:08:29.671870 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4 podName:ff5caa98-bd54-485f-a11e-46a25c98f82f nodeName:}" failed. No retries permitted until 2026-01-23 14:08:31.671849239 +0000 UTC m=+258.666677969 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2wfb4" (UniqueName: "kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4") pod "route-controller-manager-544cdfc94f-mdfkq" (UID: "ff5caa98-bd54-485f-a11e-46a25c98f82f") : failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:29 crc kubenswrapper[4775]: I0123 14:08:29.722363 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="303477e6-d4ac-4cbc-a088-3d7754129bd4" path="/var/lib/kubelet/pods/303477e6-d4ac-4cbc-a088-3d7754129bd4/volumes" Jan 23 14:08:29 crc kubenswrapper[4775]: I0123 14:08:29.723388 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db514f53-7687-42b7-b6bb-edc7208361d6" path="/var/lib/kubelet/pods/db514f53-7687-42b7-b6bb-edc7208361d6/volumes" Jan 23 14:08:29 crc kubenswrapper[4775]: E0123 14:08:29.790149 4775 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{route-controller-manager-544cdfc94f-mdfkq.188d616364dfaafd openshift-route-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-route-controller-manager,Name:route-controller-manager-544cdfc94f-mdfkq,UID:ff5caa98-bd54-485f-a11e-46a25c98f82f,APIVersion:v1,ResourceVersion:29848,FieldPath:,},Reason:FailedMount,Message:MountVolume.SetUp failed for volume \"kube-api-access-2wfb4\" : failed to fetch token: Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token\": dial tcp 38.102.83.177:6443: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 14:08:28.148509437 +0000 UTC m=+255.143338177,LastTimestamp:2026-01-23 14:08:28.148509437 +0000 UTC m=+255.143338177,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 14:08:29 crc kubenswrapper[4775]: I0123 14:08:29.812999 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.152070 4775 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: failed to sync 
configmap cache: timed out waiting for the condition
Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.152395 4775 projected.go:194] Error preparing data for projected volume kube-api-access-vr2rr for pod openshift-controller-manager/controller-manager-f759bc488-r96ss: [failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.152470 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:30.652451031 +0000 UTC m=+257.647279771 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vr2rr" (UniqueName: "kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : [failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.224266 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.279956 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0d34b3f-ebda-4e48-82ec-36db9214c42a-kube-api-access\") pod \"b0d34b3f-ebda-4e48-82ec-36db9214c42a\" (UID: \"b0d34b3f-ebda-4e48-82ec-36db9214c42a\") "
Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.280056 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b0d34b3f-ebda-4e48-82ec-36db9214c42a-var-lock\") pod \"b0d34b3f-ebda-4e48-82ec-36db9214c42a\" (UID: \"b0d34b3f-ebda-4e48-82ec-36db9214c42a\") "
Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.280119 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0d34b3f-ebda-4e48-82ec-36db9214c42a-kubelet-dir\") pod \"b0d34b3f-ebda-4e48-82ec-36db9214c42a\" (UID: \"b0d34b3f-ebda-4e48-82ec-36db9214c42a\") "
Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.280624 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0d34b3f-ebda-4e48-82ec-36db9214c42a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b0d34b3f-ebda-4e48-82ec-36db9214c42a" (UID: "b0d34b3f-ebda-4e48-82ec-36db9214c42a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
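
The UnmountVolume / TearDown sequence above is pod cleanup running in the opposite direction of the mounts earlier in the log: once a pod's containers have exited, each volume is torn down, later reported as "Volume detached", and finally the per-pod volumes directory under /var/lib/kubelet/pods/<uid>/volumes is removed (the subsequent "Cleaned up orphaned pod volumes dir" entries). A simplified Go sketch of that ordering, with hypothetical helpers rather than the kubelet's plugin-specific teardown code:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cleanupPodVolumes mirrors the teardown ordering in the log: tear down each
// volume, then remove the pod's volumes directory. Names and paths here are
// illustrative; the real teardown is plugin-specific per volume type.
func cleanupPodVolumes(podUID string, volumes []string) error {
	for _, v := range volumes {
		// For host-path volumes TearDown is effectively a no-op; projected
		// volumes remove their materialized files.
		fmt.Printf("UnmountVolume.TearDown succeeded for volume %q pod %q\n", v, podUID)
		fmt.Printf("Volume detached for volume %q\n", v)
	}
	dir := filepath.Join("/var/lib/kubelet/pods", podUID, "volumes")
	if err := os.RemoveAll(dir); err != nil {
		return err
	}
	fmt.Printf("Cleaned up orphaned pod volumes dir path=%q\n", dir)
	return nil
}

func main() {
	_ = cleanupPodVolumes("b0d34b3f-ebda-4e48-82ec-36db9214c42a",
		[]string{"kube-api-access", "var-lock", "kubelet-dir"})
}
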
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.293221 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0d34b3f-ebda-4e48-82ec-36db9214c42a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b0d34b3f-ebda-4e48-82ec-36db9214c42a" (UID: "b0d34b3f-ebda-4e48-82ec-36db9214c42a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.351667 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.352605 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.381500 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.381570 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.381590 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.381658 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.381690 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.381782 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.381949 4775 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b0d34b3f-ebda-4e48-82ec-36db9214c42a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.381961 4775 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.381969 4775 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.381977 4775 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0d34b3f-ebda-4e48-82ec-36db9214c42a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.381985 4775 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.381994 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0d34b3f-ebda-4e48-82ec-36db9214c42a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.672052 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.672785 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:31.672754892 +0000 UTC m=+258.667583672 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.672089 4775 secret.go:188] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.672109 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.672127 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.673217 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:31.673198866 +0000 UTC m=+258.668027646 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync secret cache: timed out waiting for the condition
Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.673370 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:31.67334492 +0000 UTC m=+258.668173700 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition
Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.673408 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:31.673394352 +0000 UTC m=+258.668223132 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition
Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.686453 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr2rr\" (UniqueName: \"kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss"
Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.794504 4775 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused"
Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.795096 4775 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused"
Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.795731 4775 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused"
Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.796124 4775 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused"
Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.796530 4775 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused"
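
The five consecutive "Failed to update lease" errors above, followed below by "fallback to ensure lease" and "Failed to ensure lease exists, will retry" at intervals of 200ms, 400ms, 800ms and 1.6s, show the node-lease heartbeat's retry policy: a fixed number of update attempts against the latest lease, then a get-or-create ("ensure") path retried with a doubling interval. A Go sketch of that pattern, assuming hypothetical updateLease/ensureLease functions rather than the kubelet's actual client-go lease controller:

package main

import (
	"errors"
	"fmt"
	"time"
)

const maxUpdateRetries = 5

// renewLease sketches the heartbeat policy in the log: up to five update
// attempts, then a fallback to an ensure (get-or-create) path retried with a
// doubling interval. updateLease and ensureLease are hypothetical stand-ins.
func renewLease(updateLease, ensureLease func() error) {
	for i := 0; i < maxUpdateRetries; i++ {
		if err := updateLease(); err == nil {
			return
		}
	}
	fmt.Println("failed to update lease using latest lease, fallback to ensure lease")
	interval := 200 * time.Millisecond
	for {
		if err := ensureLease(); err == nil {
			return
		}
		fmt.Printf("Failed to ensure lease exists, will retry interval=%v\n", interval)
		time.Sleep(interval)
		interval *= 2 // 200ms, 400ms, 800ms, 1.6s... as in the intervals logged here
	}
}

func main() {
	attempts := 0
	alwaysRefused := func() error { return errors.New("connect: connection refused") }
	eventuallyOK := func() error {
		attempts++
		if attempts < 4 {
			return errors.New("connect: connection refused")
		}
		return nil
	}
	renewLease(alwaysRefused, eventuallyOK)
}
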
Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.796705 4775 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.797270 4775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="200ms"
Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.822436 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.823395 4775 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca" exitCode=0
Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.823475 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.823502 4775 scope.go:117] "RemoveContainer" containerID="0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c"
Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.826075 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b0d34b3f-ebda-4e48-82ec-36db9214c42a","Type":"ContainerDied","Data":"50a3207c43535211cc781efbf364abe05d4043fb9f6a837131123ef8444aee37"}
Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.826122 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50a3207c43535211cc781efbf364abe05d4043fb9f6a837131123ef8444aee37"
Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.826209 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.854211 4775 scope.go:117] "RemoveContainer" containerID="a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.876447 4775 scope.go:117] "RemoveContainer" containerID="84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.896908 4775 scope.go:117] "RemoveContainer" containerID="cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.919878 4775 scope.go:117] "RemoveContainer" containerID="11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.940850 4775 scope.go:117] "RemoveContainer" containerID="039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.961704 4775 scope.go:117] "RemoveContainer" containerID="0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c" Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.962209 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\": container with ID starting with 0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c not found: ID does not exist" containerID="0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.962255 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c"} err="failed to get container status \"0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\": rpc error: code = NotFound desc = could not find container \"0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\": container with ID starting with 0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c not found: ID does not exist" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.962292 4775 scope.go:117] "RemoveContainer" containerID="a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792" Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.963239 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\": container with ID starting with a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792 not found: ID does not exist" containerID="a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.963267 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792"} err="failed to get container status \"a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\": rpc error: code = NotFound desc = could not find container \"a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\": container with ID starting with a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792 not found: ID does not exist" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.963281 4775 scope.go:117] "RemoveContainer" 
containerID="84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690" Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.964331 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\": container with ID starting with 84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690 not found: ID does not exist" containerID="84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.964356 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690"} err="failed to get container status \"84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\": rpc error: code = NotFound desc = could not find container \"84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\": container with ID starting with 84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690 not found: ID does not exist" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.964405 4775 scope.go:117] "RemoveContainer" containerID="cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185" Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.964914 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\": container with ID starting with cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185 not found: ID does not exist" containerID="cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.964937 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185"} err="failed to get container status \"cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\": rpc error: code = NotFound desc = could not find container \"cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\": container with ID starting with cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185 not found: ID does not exist" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.964952 4775 scope.go:117] "RemoveContainer" containerID="11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca" Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.965553 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\": container with ID starting with 11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca not found: ID does not exist" containerID="11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.965587 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca"} err="failed to get container status \"11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\": rpc error: code = NotFound desc = could not find container \"11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\": container with ID starting with 
11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca not found: ID does not exist" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.965604 4775 scope.go:117] "RemoveContainer" containerID="039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53" Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.965990 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\": container with ID starting with 039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53 not found: ID does not exist" containerID="039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.966022 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53"} err="failed to get container status \"039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\": rpc error: code = NotFound desc = could not find container \"039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\": container with ID starting with 039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53 not found: ID does not exist" Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.998514 4775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="400ms" Jan 23 14:08:31 crc kubenswrapper[4775]: E0123 14:08:31.399866 4775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="800ms" Jan 23 14:08:31 crc kubenswrapper[4775]: E0123 14:08:31.687938 4775 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:31 crc kubenswrapper[4775]: I0123 14:08:31.699211 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wfb4\" (UniqueName: \"kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:31 crc kubenswrapper[4775]: I0123 14:08:31.699258 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:31 crc kubenswrapper[4775]: I0123 14:08:31.699321 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:31 crc kubenswrapper[4775]: 
I0123 14:08:31.699357 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:31 crc kubenswrapper[4775]: I0123 14:08:31.699387 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:31 crc kubenswrapper[4775]: E0123 14:08:31.699645 4775 projected.go:194] Error preparing data for projected volume kube-api-access-2wfb4 for pod openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq: failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:31 crc kubenswrapper[4775]: E0123 14:08:31.699698 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4 podName:ff5caa98-bd54-485f-a11e-46a25c98f82f nodeName:}" failed. No retries permitted until 2026-01-23 14:08:35.699685348 +0000 UTC m=+262.694514088 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2wfb4" (UniqueName: "kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4") pod "route-controller-manager-544cdfc94f-mdfkq" (UID: "ff5caa98-bd54-485f-a11e-46a25c98f82f") : failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:31 crc kubenswrapper[4775]: I0123 14:08:31.719623 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 23 14:08:32 crc kubenswrapper[4775]: E0123 14:08:32.200758 4775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="1.6s" Jan 23 14:08:32 crc kubenswrapper[4775]: E0123 14:08:32.688375 4775 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:32 crc kubenswrapper[4775]: E0123 14:08:32.688435 4775 projected.go:194] Error preparing data for projected volume kube-api-access-vr2rr for pod openshift-controller-manager/controller-manager-f759bc488-r96ss: [failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Jan 23 14:08:32 crc kubenswrapper[4775]: E0123 14:08:32.688547 4775 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:33.688518725 +0000 UTC m=+260.683347495 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-vr2rr" (UniqueName: "kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : [failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Jan 23 14:08:32 crc kubenswrapper[4775]: E0123 14:08:32.700110 4775 secret.go:188] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 23 14:08:32 crc kubenswrapper[4775]: E0123 14:08:32.700160 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:32 crc kubenswrapper[4775]: E0123 14:08:32.700164 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:32 crc kubenswrapper[4775]: E0123 14:08:32.700191 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:34.700171819 +0000 UTC m=+261.695000589 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync secret cache: timed out waiting for the condition Jan 23 14:08:32 crc kubenswrapper[4775]: E0123 14:08:32.700124 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:32 crc kubenswrapper[4775]: E0123 14:08:32.700237 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:34.700217681 +0000 UTC m=+261.695046421 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:32 crc kubenswrapper[4775]: E0123 14:08:32.700252 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:34.700246562 +0000 UTC m=+261.695075302 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:32 crc kubenswrapper[4775]: E0123 14:08:32.700266 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:34.700258282 +0000 UTC m=+261.695087022 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:33 crc kubenswrapper[4775]: W0123 14:08:33.029546 4775 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:33 crc kubenswrapper[4775]: E0123 14:08:33.029682 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:33 crc kubenswrapper[4775]: W0123 14:08:33.041504 4775 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:33 crc kubenswrapper[4775]: W0123 14:08:33.041715 4775 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-msq4c&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:33 crc kubenswrapper[4775]: W0123 14:08:33.041528 4775 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:33 crc kubenswrapper[4775]: E0123 14:08:33.041743 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:33 crc kubenswrapper[4775]: E0123 14:08:33.042027 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-msq4c&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:33 crc kubenswrapper[4775]: W0123 14:08:33.041711 4775 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:33 crc kubenswrapper[4775]: E0123 14:08:33.042256 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:33 crc kubenswrapper[4775]: W0123 14:08:33.041711 4775 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:33 crc kubenswrapper[4775]: E0123 14:08:33.042336 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:33 crc kubenswrapper[4775]: W0123 14:08:33.041531 4775 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:33 crc kubenswrapper[4775]: E0123 14:08:33.042376 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:33 crc kubenswrapper[4775]: E0123 14:08:33.042116 4775 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:33 crc kubenswrapper[4775]: I0123 14:08:33.047735 4775 status_manager.go:851] "Failed to get status for pod" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-f759bc488-r96ss\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:33 crc kubenswrapper[4775]: I0123 14:08:33.048424 4775 status_manager.go:851] "Failed to get status for pod" podUID="b0d34b3f-ebda-4e48-82ec-36db9214c42a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:33 crc kubenswrapper[4775]: I0123 14:08:33.049124 4775 status_manager.go:851] "Failed to get status for pod" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-f759bc488-r96ss\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:33 crc kubenswrapper[4775]: E0123 14:08:33.058359 4775 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.177:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:33 crc kubenswrapper[4775]: I0123 14:08:33.058982 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:33 crc kubenswrapper[4775]: I0123 14:08:33.718092 4775 status_manager.go:851] "Failed to get status for pod" podUID="b0d34b3f-ebda-4e48-82ec-36db9214c42a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:33 crc kubenswrapper[4775]: I0123 14:08:33.719095 4775 status_manager.go:851] "Failed to get status for pod" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-f759bc488-r96ss\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:33 crc kubenswrapper[4775]: I0123 14:08:33.724836 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr2rr\" (UniqueName: \"kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:33 crc kubenswrapper[4775]: E0123 14:08:33.802500 4775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="3.2s" Jan 23 14:08:33 crc kubenswrapper[4775]: I0123 14:08:33.803023 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" podUID="3066d31d-92a4-45a7-b368-ba66d5689456" containerName="oauth-openshift" containerID="cri-o://b55e2c335cddf1f1e9c9202e83c490ce85712c353fa0cf36a620dab99d97659f" gracePeriod=15 Jan 23 14:08:33 crc kubenswrapper[4775]: I0123 14:08:33.847719 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"aff268ac61a1e94757e586a2d154e2ae45702e5030a24a5cd4532578fe0a281b"} Jan 23 14:08:33 crc kubenswrapper[4775]: I0123 14:08:33.847884 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6f7183e40e07981a30306e2dbf34ad8d9e3471d4eb8c3a38fc292f3ddd0da04b"} Jan 23 14:08:33 crc kubenswrapper[4775]: I0123 14:08:33.849201 4775 status_manager.go:851] "Failed to get status for pod" podUID="b0d34b3f-ebda-4e48-82ec-36db9214c42a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:33 crc kubenswrapper[4775]: E0123 14:08:33.849416 4775 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.177:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:33 crc kubenswrapper[4775]: I0123 14:08:33.849670 4775 status_manager.go:851] "Failed to get status for 
pod" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-f759bc488-r96ss\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:33 crc kubenswrapper[4775]: W0123 14:08:33.885626 4775 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-msq4c&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:33 crc kubenswrapper[4775]: E0123 14:08:33.885684 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-msq4c&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:34 crc kubenswrapper[4775]: W0123 14:08:34.038247 4775 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:34 crc kubenswrapper[4775]: E0123 14:08:34.038349 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:34 crc kubenswrapper[4775]: W0123 14:08:34.045104 4775 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:34 crc kubenswrapper[4775]: E0123 14:08:34.045180 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:34 crc kubenswrapper[4775]: W0123 14:08:34.051097 4775 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:34 crc kubenswrapper[4775]: E0123 
14:08:34.051281 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:34 crc kubenswrapper[4775]: W0123 14:08:34.054771 4775 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:34 crc kubenswrapper[4775]: E0123 14:08:34.054861 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.191391 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.192175 4775 status_manager.go:851] "Failed to get status for pod" podUID="b0d34b3f-ebda-4e48-82ec-36db9214c42a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.192514 4775 status_manager.go:851] "Failed to get status for pod" podUID="3066d31d-92a4-45a7-b368-ba66d5689456" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-4q8mj\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.194014 4775 status_manager.go:851] "Failed to get status for pod" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-f759bc488-r96ss\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:34 crc kubenswrapper[4775]: W0123 14:08:34.235225 4775 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:34 crc kubenswrapper[4775]: E0123 14:08:34.235277 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.332039 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-error\") pod \"3066d31d-92a4-45a7-b368-ba66d5689456\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.332130 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-login\") pod \"3066d31d-92a4-45a7-b368-ba66d5689456\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.332164 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-provider-selection\") pod \"3066d31d-92a4-45a7-b368-ba66d5689456\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.332203 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-service-ca\") pod \"3066d31d-92a4-45a7-b368-ba66d5689456\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.332233 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-router-certs\") pod \"3066d31d-92a4-45a7-b368-ba66d5689456\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.332264 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-cliconfig\") pod \"3066d31d-92a4-45a7-b368-ba66d5689456\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.332295 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3066d31d-92a4-45a7-b368-ba66d5689456-audit-dir\") pod \"3066d31d-92a4-45a7-b368-ba66d5689456\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.332339 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-idp-0-file-data\") pod \"3066d31d-92a4-45a7-b368-ba66d5689456\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.332372 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6js2\" (UniqueName: 
\"kubernetes.io/projected/3066d31d-92a4-45a7-b368-ba66d5689456-kube-api-access-p6js2\") pod \"3066d31d-92a4-45a7-b368-ba66d5689456\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.332412 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-serving-cert\") pod \"3066d31d-92a4-45a7-b368-ba66d5689456\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.332436 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-audit-policies\") pod \"3066d31d-92a4-45a7-b368-ba66d5689456\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.332490 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-session\") pod \"3066d31d-92a4-45a7-b368-ba66d5689456\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.332537 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-trusted-ca-bundle\") pod \"3066d31d-92a4-45a7-b368-ba66d5689456\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.332575 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-ocp-branding-template\") pod \"3066d31d-92a4-45a7-b368-ba66d5689456\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.332832 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3066d31d-92a4-45a7-b368-ba66d5689456-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3066d31d-92a4-45a7-b368-ba66d5689456" (UID: "3066d31d-92a4-45a7-b368-ba66d5689456"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.333022 4775 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3066d31d-92a4-45a7-b368-ba66d5689456-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.334216 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "3066d31d-92a4-45a7-b368-ba66d5689456" (UID: "3066d31d-92a4-45a7-b368-ba66d5689456"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.334297 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "3066d31d-92a4-45a7-b368-ba66d5689456" (UID: "3066d31d-92a4-45a7-b368-ba66d5689456"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.334525 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "3066d31d-92a4-45a7-b368-ba66d5689456" (UID: "3066d31d-92a4-45a7-b368-ba66d5689456"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.334895 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "3066d31d-92a4-45a7-b368-ba66d5689456" (UID: "3066d31d-92a4-45a7-b368-ba66d5689456"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.339488 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3066d31d-92a4-45a7-b368-ba66d5689456-kube-api-access-p6js2" (OuterVolumeSpecName: "kube-api-access-p6js2") pod "3066d31d-92a4-45a7-b368-ba66d5689456" (UID: "3066d31d-92a4-45a7-b368-ba66d5689456"). InnerVolumeSpecName "kube-api-access-p6js2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.339506 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "3066d31d-92a4-45a7-b368-ba66d5689456" (UID: "3066d31d-92a4-45a7-b368-ba66d5689456"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.339957 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "3066d31d-92a4-45a7-b368-ba66d5689456" (UID: "3066d31d-92a4-45a7-b368-ba66d5689456"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.340242 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "3066d31d-92a4-45a7-b368-ba66d5689456" (UID: "3066d31d-92a4-45a7-b368-ba66d5689456"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.340421 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "3066d31d-92a4-45a7-b368-ba66d5689456" (UID: "3066d31d-92a4-45a7-b368-ba66d5689456"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.340653 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "3066d31d-92a4-45a7-b368-ba66d5689456" (UID: "3066d31d-92a4-45a7-b368-ba66d5689456"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.340888 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "3066d31d-92a4-45a7-b368-ba66d5689456" (UID: "3066d31d-92a4-45a7-b368-ba66d5689456"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.341118 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "3066d31d-92a4-45a7-b368-ba66d5689456" (UID: "3066d31d-92a4-45a7-b368-ba66d5689456"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.341245 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "3066d31d-92a4-45a7-b368-ba66d5689456" (UID: "3066d31d-92a4-45a7-b368-ba66d5689456"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.433877 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.433923 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.433938 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.433953 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6js2\" (UniqueName: \"kubernetes.io/projected/3066d31d-92a4-45a7-b368-ba66d5689456-kube-api-access-p6js2\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.433965 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.433977 4775 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.433992 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.434005 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.434017 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.434030 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.434042 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.434055 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.434068 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:34 crc kubenswrapper[4775]: W0123 14:08:34.481976 4775 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:34 crc kubenswrapper[4775]: E0123 14:08:34.482042 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:34 crc kubenswrapper[4775]: E0123 14:08:34.727696 4775 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.770664 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.770780 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.770864 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.770913 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.855732 4775 generic.go:334] "Generic (PLEG): container finished" podID="3066d31d-92a4-45a7-b368-ba66d5689456" containerID="b55e2c335cddf1f1e9c9202e83c490ce85712c353fa0cf36a620dab99d97659f" exitCode=0 Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.856075 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" event={"ID":"3066d31d-92a4-45a7-b368-ba66d5689456","Type":"ContainerDied","Data":"b55e2c335cddf1f1e9c9202e83c490ce85712c353fa0cf36a620dab99d97659f"} Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.856265 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" event={"ID":"3066d31d-92a4-45a7-b368-ba66d5689456","Type":"ContainerDied","Data":"74f4cd2270219100871d3310c76c771eee7c27cb5f3b7f3244692cc8ce1e0535"} Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.856477 4775 scope.go:117] "RemoveContainer" containerID="b55e2c335cddf1f1e9c9202e83c490ce85712c353fa0cf36a620dab99d97659f" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.856747 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.858229 4775 status_manager.go:851] "Failed to get status for pod" podUID="b0d34b3f-ebda-4e48-82ec-36db9214c42a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.858729 4775 status_manager.go:851] "Failed to get status for pod" podUID="3066d31d-92a4-45a7-b368-ba66d5689456" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-4q8mj\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.859533 4775 status_manager.go:851] "Failed to get status for pod" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-f759bc488-r96ss\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.884840 4775 status_manager.go:851] "Failed to get status for pod" podUID="b0d34b3f-ebda-4e48-82ec-36db9214c42a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.885612 4775 status_manager.go:851] "Failed to get status for pod" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-f759bc488-r96ss\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.886274 4775 status_manager.go:851] "Failed to get status for pod" podUID="3066d31d-92a4-45a7-b368-ba66d5689456" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-4q8mj\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.890516 4775 scope.go:117] "RemoveContainer" 
containerID="b55e2c335cddf1f1e9c9202e83c490ce85712c353fa0cf36a620dab99d97659f" Jan 23 14:08:34 crc kubenswrapper[4775]: E0123 14:08:34.891257 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b55e2c335cddf1f1e9c9202e83c490ce85712c353fa0cf36a620dab99d97659f\": container with ID starting with b55e2c335cddf1f1e9c9202e83c490ce85712c353fa0cf36a620dab99d97659f not found: ID does not exist" containerID="b55e2c335cddf1f1e9c9202e83c490ce85712c353fa0cf36a620dab99d97659f" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.891309 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b55e2c335cddf1f1e9c9202e83c490ce85712c353fa0cf36a620dab99d97659f"} err="failed to get container status \"b55e2c335cddf1f1e9c9202e83c490ce85712c353fa0cf36a620dab99d97659f\": rpc error: code = NotFound desc = could not find container \"b55e2c335cddf1f1e9c9202e83c490ce85712c353fa0cf36a620dab99d97659f\": container with ID starting with b55e2c335cddf1f1e9c9202e83c490ce85712c353fa0cf36a620dab99d97659f not found: ID does not exist" Jan 23 14:08:35 crc kubenswrapper[4775]: W0123 14:08:35.724882 4775 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:35 crc kubenswrapper[4775]: E0123 14:08:35.726468 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:35 crc kubenswrapper[4775]: E0123 14:08:35.728147 4775 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:35 crc kubenswrapper[4775]: E0123 14:08:35.728208 4775 projected.go:194] Error preparing data for projected volume kube-api-access-vr2rr for pod openshift-controller-manager/controller-manager-f759bc488-r96ss: [failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Jan 23 14:08:35 crc kubenswrapper[4775]: E0123 14:08:35.728322 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:37.728286132 +0000 UTC m=+264.723114912 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vr2rr" (UniqueName: "kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : [failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Jan 23 14:08:35 crc kubenswrapper[4775]: E0123 14:08:35.771889 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:35 crc kubenswrapper[4775]: E0123 14:08:35.771937 4775 secret.go:188] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 23 14:08:35 crc kubenswrapper[4775]: E0123 14:08:35.772006 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:35 crc kubenswrapper[4775]: E0123 14:08:35.772095 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:35 crc kubenswrapper[4775]: E0123 14:08:35.772036 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:39.772006124 +0000 UTC m=+266.766834904 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:35 crc kubenswrapper[4775]: E0123 14:08:35.772199 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:39.772155119 +0000 UTC m=+266.766983919 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync secret cache: timed out waiting for the condition Jan 23 14:08:35 crc kubenswrapper[4775]: E0123 14:08:35.772257 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:39.772236562 +0000 UTC m=+266.767065392 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:35 crc kubenswrapper[4775]: E0123 14:08:35.772299 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:39.772285543 +0000 UTC m=+266.767114323 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:35 crc kubenswrapper[4775]: I0123 14:08:35.784644 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wfb4\" (UniqueName: \"kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:35 crc kubenswrapper[4775]: E0123 14:08:35.785762 4775 projected.go:194] Error preparing data for projected volume kube-api-access-2wfb4 for pod openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq: failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:35 crc kubenswrapper[4775]: E0123 14:08:35.785956 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4 podName:ff5caa98-bd54-485f-a11e-46a25c98f82f nodeName:}" failed. No retries permitted until 2026-01-23 14:08:43.785925805 +0000 UTC m=+270.780754585 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2wfb4" (UniqueName: "kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4") pod "route-controller-manager-544cdfc94f-mdfkq" (UID: "ff5caa98-bd54-485f-a11e-46a25c98f82f") : failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:36 crc kubenswrapper[4775]: W0123 14:08:36.335257 4775 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:36 crc kubenswrapper[4775]: E0123 14:08:36.335365 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:36 crc kubenswrapper[4775]: W0123 14:08:36.665949 4775 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-msq4c&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:36 crc kubenswrapper[4775]: E0123 14:08:36.666092 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-msq4c&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:36 crc kubenswrapper[4775]: W0123 14:08:36.743883 4775 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:36 crc kubenswrapper[4775]: E0123 14:08:36.744045 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:36 crc kubenswrapper[4775]: W0123 14:08:36.789348 4775 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:36 crc kubenswrapper[4775]: E0123 14:08:36.789655 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:36 crc kubenswrapper[4775]: W0123 14:08:36.858269 4775 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:36 crc kubenswrapper[4775]: E0123 14:08:36.858389 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:37 crc kubenswrapper[4775]: E0123 14:08:37.004761 4775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="6.4s" Jan 23 14:08:37 crc kubenswrapper[4775]: W0123 14:08:37.255223 4775 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:37 crc kubenswrapper[4775]: E0123 14:08:37.255359 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:37 crc kubenswrapper[4775]: I0123 14:08:37.732978 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr2rr\" (UniqueName: \"kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:38 crc kubenswrapper[4775]: E0123 14:08:38.734561 4775 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:39 crc kubenswrapper[4775]: E0123 14:08:39.735150 4775 
projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:39 crc kubenswrapper[4775]: E0123 14:08:39.735195 4775 projected.go:194] Error preparing data for projected volume kube-api-access-vr2rr for pod openshift-controller-manager/controller-manager-f759bc488-r96ss: [failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Jan 23 14:08:39 crc kubenswrapper[4775]: E0123 14:08:39.735285 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:43.735260871 +0000 UTC m=+270.730089611 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-vr2rr" (UniqueName: "kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : [failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Jan 23 14:08:39 crc kubenswrapper[4775]: E0123 14:08:39.790887 4775 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{route-controller-manager-544cdfc94f-mdfkq.188d616364dfaafd openshift-route-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-route-controller-manager,Name:route-controller-manager-544cdfc94f-mdfkq,UID:ff5caa98-bd54-485f-a11e-46a25c98f82f,APIVersion:v1,ResourceVersion:29848,FieldPath:,},Reason:FailedMount,Message:MountVolume.SetUp failed for volume \"kube-api-access-2wfb4\" : failed to fetch token: Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token\": dial tcp 38.102.83.177:6443: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 14:08:28.148509437 +0000 UTC m=+255.143338177,LastTimestamp:2026-01-23 14:08:28.148509437 +0000 UTC m=+255.143338177,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 14:08:39 crc kubenswrapper[4775]: I0123 14:08:39.859461 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:39 crc kubenswrapper[4775]: I0123 14:08:39.859781 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:39 crc kubenswrapper[4775]: I0123 14:08:39.860013 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:39 crc kubenswrapper[4775]: I0123 14:08:39.860171 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:40 crc kubenswrapper[4775]: W0123 14:08:40.344775 4775 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:40 crc kubenswrapper[4775]: E0123 14:08:40.344893 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:40 crc kubenswrapper[4775]: W0123 14:08:40.601025 4775 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:40 crc kubenswrapper[4775]: E0123 14:08:40.601455 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:40 crc kubenswrapper[4775]: W0123 14:08:40.761750 4775 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:40 crc kubenswrapper[4775]: E0123 14:08:40.761933 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:40 crc kubenswrapper[4775]: E0123 14:08:40.859955 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:40 crc kubenswrapper[4775]: E0123 14:08:40.860098 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:48.860066751 +0000 UTC m=+275.854895531 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:40 crc kubenswrapper[4775]: E0123 14:08:40.860224 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:40 crc kubenswrapper[4775]: E0123 14:08:40.860253 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:40 crc kubenswrapper[4775]: E0123 14:08:40.860274 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:48.860261577 +0000 UTC m=+275.855090317 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:40 crc kubenswrapper[4775]: E0123 14:08:40.860296 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:48.860283217 +0000 UTC m=+275.855111987 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:40 crc kubenswrapper[4775]: E0123 14:08:40.860985 4775 secret.go:188] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 23 14:08:40 crc kubenswrapper[4775]: E0123 14:08:40.861039 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:48.86102671 +0000 UTC m=+275.855855450 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync secret cache: timed out waiting for the condition Jan 23 14:08:40 crc kubenswrapper[4775]: W0123 14:08:40.869749 4775 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:40 crc kubenswrapper[4775]: E0123 14:08:40.869909 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:41 crc kubenswrapper[4775]: W0123 14:08:41.078758 4775 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-msq4c&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:41 crc kubenswrapper[4775]: E0123 14:08:41.078896 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-msq4c&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:41 crc kubenswrapper[4775]: W0123 14:08:41.572297 4775 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:41 crc kubenswrapper[4775]: E0123 14:08:41.572416 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:41 crc kubenswrapper[4775]: W0123 14:08:41.852944 4775 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:41 crc kubenswrapper[4775]: E0123 14:08:41.853070 4775 reflector.go:158] 
"Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:42 crc kubenswrapper[4775]: I0123 14:08:42.713774 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:42 crc kubenswrapper[4775]: I0123 14:08:42.714768 4775 status_manager.go:851] "Failed to get status for pod" podUID="b0d34b3f-ebda-4e48-82ec-36db9214c42a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:42 crc kubenswrapper[4775]: I0123 14:08:42.715375 4775 status_manager.go:851] "Failed to get status for pod" podUID="3066d31d-92a4-45a7-b368-ba66d5689456" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-4q8mj\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:42 crc kubenswrapper[4775]: I0123 14:08:42.715744 4775 status_manager.go:851] "Failed to get status for pod" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-f759bc488-r96ss\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:42 crc kubenswrapper[4775]: I0123 14:08:42.736435 4775 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0977f59d-f8ab-406f-adf0-f3ac44424242" Jan 23 14:08:42 crc kubenswrapper[4775]: I0123 14:08:42.736486 4775 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0977f59d-f8ab-406f-adf0-f3ac44424242" Jan 23 14:08:42 crc kubenswrapper[4775]: E0123 14:08:42.737140 4775 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:42 crc kubenswrapper[4775]: I0123 14:08:42.737701 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:42 crc kubenswrapper[4775]: I0123 14:08:42.905989 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1189ac241392d12ae197de28172c1eb38c5e4b0c799568b801ea1bd502836315"} Jan 23 14:08:42 crc kubenswrapper[4775]: I0123 14:08:42.908861 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 23 14:08:42 crc kubenswrapper[4775]: I0123 14:08:42.908909 4775 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212" exitCode=1 Jan 23 14:08:42 crc kubenswrapper[4775]: I0123 14:08:42.908938 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212"} Jan 23 14:08:42 crc kubenswrapper[4775]: I0123 14:08:42.909380 4775 scope.go:117] "RemoveContainer" containerID="0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212" Jan 23 14:08:42 crc kubenswrapper[4775]: I0123 14:08:42.909729 4775 status_manager.go:851] "Failed to get status for pod" podUID="b0d34b3f-ebda-4e48-82ec-36db9214c42a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:42 crc kubenswrapper[4775]: I0123 14:08:42.910221 4775 status_manager.go:851] "Failed to get status for pod" podUID="3066d31d-92a4-45a7-b368-ba66d5689456" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-4q8mj\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:42 crc kubenswrapper[4775]: I0123 14:08:42.910621 4775 status_manager.go:851] "Failed to get status for pod" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-f759bc488-r96ss\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:42 crc kubenswrapper[4775]: I0123 14:08:42.910887 4775 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:43 crc kubenswrapper[4775]: E0123 14:08:43.406205 4775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="7s" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.717853 4775 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.718650 4775 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.719369 4775 status_manager.go:851] "Failed to get status for pod" podUID="b0d34b3f-ebda-4e48-82ec-36db9214c42a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.719843 4775 status_manager.go:851] "Failed to get status for pod" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-f759bc488-r96ss\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.720223 4775 status_manager.go:851] "Failed to get status for pod" podUID="3066d31d-92a4-45a7-b368-ba66d5689456" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-4q8mj\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.811616 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr2rr\" (UniqueName: \"kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.811693 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wfb4\" (UniqueName: \"kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:43 crc kubenswrapper[4775]: E0123 14:08:43.812507 4775 projected.go:194] Error preparing data for projected volume kube-api-access-2wfb4 for pod openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq: failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:43 crc kubenswrapper[4775]: E0123 14:08:43.812602 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4 podName:ff5caa98-bd54-485f-a11e-46a25c98f82f nodeName:}" failed. 
No retries permitted until 2026-01-23 14:08:59.812580097 +0000 UTC m=+286.807408877 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-2wfb4" (UniqueName: "kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4") pod "route-controller-manager-544cdfc94f-mdfkq" (UID: "ff5caa98-bd54-485f-a11e-46a25c98f82f") : failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.922960 4775 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="66779cb0bfcfce756fbae36ed1bca9e0efea301f100ab3fd85127a0ec86aa8d5" exitCode=0 Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.923071 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"66779cb0bfcfce756fbae36ed1bca9e0efea301f100ab3fd85127a0ec86aa8d5"} Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.924219 4775 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0977f59d-f8ab-406f-adf0-f3ac44424242" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.924261 4775 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0977f59d-f8ab-406f-adf0-f3ac44424242" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.924564 4775 status_manager.go:851] "Failed to get status for pod" podUID="b0d34b3f-ebda-4e48-82ec-36db9214c42a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.925260 4775 status_manager.go:851] "Failed to get status for pod" podUID="3066d31d-92a4-45a7-b368-ba66d5689456" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-4q8mj\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:43 crc kubenswrapper[4775]: E0123 14:08:43.925280 4775 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.926111 4775 status_manager.go:851] "Failed to get status for pod" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-f759bc488-r96ss\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.926610 4775 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: 
connection refused" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.927292 4775 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.928170 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.928262 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6b821d7f72df334480c68b0f88ce737d26860c6c898f513dd696f37f929188b3"} Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.929149 4775 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.929417 4775 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.929860 4775 status_manager.go:851] "Failed to get status for pod" podUID="b0d34b3f-ebda-4e48-82ec-36db9214c42a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.930543 4775 status_manager.go:851] "Failed to get status for pod" podUID="3066d31d-92a4-45a7-b368-ba66d5689456" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-4q8mj\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.930828 4775 status_manager.go:851] "Failed to get status for pod" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-f759bc488-r96ss\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:44 crc kubenswrapper[4775]: E0123 14:08:44.813641 4775 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:44 crc kubenswrapper[4775]: I0123 14:08:44.945603 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"675bf5e9c2285f4e55481cd8056521b35be83ea39288e07aff2fd527fe10f7a1"} Jan 23 14:08:44 crc kubenswrapper[4775]: I0123 14:08:44.945666 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"586bdbeab6277db5db805c44f7257af5857c0dd7442ba238cc0d7d596fa68408"} Jan 23 14:08:44 crc kubenswrapper[4775]: I0123 14:08:44.945681 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"944376e72a42cc14e53fd0437f6c53ef4e33d4f1a54304b9cc93f1759403fb1d"} Jan 23 14:08:45 crc kubenswrapper[4775]: E0123 14:08:45.814257 4775 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:45 crc kubenswrapper[4775]: E0123 14:08:45.814565 4775 projected.go:194] Error preparing data for projected volume kube-api-access-vr2rr for pod openshift-controller-manager/controller-manager-f759bc488-r96ss: [failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Jan 23 14:08:45 crc kubenswrapper[4775]: E0123 14:08:45.814877 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:53.814701668 +0000 UTC m=+280.809530418 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vr2rr" (UniqueName: "kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : [failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Jan 23 14:08:45 crc kubenswrapper[4775]: I0123 14:08:45.953860 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a976a0c3ce6f4391d8547e6d1bc358159da2b23c5b78cfe8cc79035713150b99"} Jan 23 14:08:45 crc kubenswrapper[4775]: I0123 14:08:45.954863 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9154c049e861c18ffcf33cb9af0054b67e678116f8c03aa8cd12ae8d5332a838"} Jan 23 14:08:45 crc kubenswrapper[4775]: I0123 14:08:45.955029 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:45 crc kubenswrapper[4775]: I0123 14:08:45.954303 4775 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0977f59d-f8ab-406f-adf0-f3ac44424242" Jan 23 14:08:45 crc kubenswrapper[4775]: I0123 14:08:45.955231 4775 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0977f59d-f8ab-406f-adf0-f3ac44424242" Jan 23 14:08:47 crc kubenswrapper[4775]: I0123 14:08:47.020751 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 14:08:47 crc kubenswrapper[4775]: I0123 14:08:47.025964 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 14:08:47 crc kubenswrapper[4775]: I0123 14:08:47.738634 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:47 crc kubenswrapper[4775]: I0123 14:08:47.739063 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:47 crc kubenswrapper[4775]: I0123 14:08:47.744409 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:47 crc kubenswrapper[4775]: I0123 14:08:47.971146 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 14:08:48 crc kubenswrapper[4775]: I0123 14:08:48.847211 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 14:08:48 crc kubenswrapper[4775]: I0123 14:08:48.876504 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:48 crc 
kubenswrapper[4775]: I0123 14:08:48.876591 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:48 crc kubenswrapper[4775]: I0123 14:08:48.876645 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:48 crc kubenswrapper[4775]: I0123 14:08:48.876737 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:48 crc kubenswrapper[4775]: I0123 14:08:48.886666 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:49 crc kubenswrapper[4775]: E0123 14:08:49.877056 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:49 crc kubenswrapper[4775]: E0123 14:08:49.877857 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:09:05.877796734 +0000 UTC m=+292.872625494 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:49 crc kubenswrapper[4775]: E0123 14:08:49.877935 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:49 crc kubenswrapper[4775]: E0123 14:08:49.877992 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:09:05.877978469 +0000 UTC m=+292.872807229 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:49 crc kubenswrapper[4775]: E0123 14:08:49.877987 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:49 crc kubenswrapper[4775]: E0123 14:08:49.878092 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:09:05.878067672 +0000 UTC m=+292.872896412 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:50 crc kubenswrapper[4775]: I0123 14:08:50.274415 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 14:08:50 crc kubenswrapper[4775]: I0123 14:08:50.685305 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 14:08:50 crc kubenswrapper[4775]: I0123 14:08:50.968361 4775 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:51 crc kubenswrapper[4775]: I0123 14:08:51.009608 4775 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f69835f3-89b6-4006-ab11-dcff693b4116" Jan 23 14:08:51 crc kubenswrapper[4775]: I0123 14:08:51.407833 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 14:08:51 crc kubenswrapper[4775]: I0123 14:08:51.991227 4775 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0977f59d-f8ab-406f-adf0-f3ac44424242" Jan 23 14:08:51 crc kubenswrapper[4775]: I0123 14:08:51.991521 4775 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0977f59d-f8ab-406f-adf0-f3ac44424242" Jan 23 14:08:51 crc kubenswrapper[4775]: I0123 14:08:51.995300 4775 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f69835f3-89b6-4006-ab11-dcff693b4116" Jan 23 14:08:51 crc kubenswrapper[4775]: I0123 14:08:51.996029 4775 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://944376e72a42cc14e53fd0437f6c53ef4e33d4f1a54304b9cc93f1759403fb1d" Jan 23 14:08:51 crc kubenswrapper[4775]: I0123 14:08:51.996054 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:52 crc kubenswrapper[4775]: I0123 14:08:52.010890 4775 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 14:08:53 crc kubenswrapper[4775]: I0123 14:08:53.000438 4775 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0977f59d-f8ab-406f-adf0-f3ac44424242" Jan 23 14:08:53 crc kubenswrapper[4775]: I0123 14:08:53.000511 4775 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0977f59d-f8ab-406f-adf0-f3ac44424242" Jan 23 14:08:53 crc kubenswrapper[4775]: I0123 14:08:53.005630 4775 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f69835f3-89b6-4006-ab11-dcff693b4116" Jan 23 14:08:53 crc kubenswrapper[4775]: I0123 14:08:53.348889 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 14:08:53 crc kubenswrapper[4775]: I0123 14:08:53.841764 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr2rr\" (UniqueName: \"kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:53 crc kubenswrapper[4775]: I0123 14:08:53.877581 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr2rr\" (UniqueName: \"kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:54 crc kubenswrapper[4775]: I0123 14:08:54.269304 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 14:08:59 crc kubenswrapper[4775]: I0123 14:08:59.826560 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wfb4\" (UniqueName: \"kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:59 crc kubenswrapper[4775]: I0123 14:08:59.852197 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wfb4\" (UniqueName: \"kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:59 crc kubenswrapper[4775]: I0123 14:08:59.884970 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq"
Jan 23 14:09:00 crc kubenswrapper[4775]: W0123 14:09:00.094937 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff5caa98_bd54_485f_a11e_46a25c98f82f.slice/crio-b411bfdbc4445453b98beff52c995d60e91303d316b63ebd7869ac7d9567858a WatchSource:0}: Error finding container b411bfdbc4445453b98beff52c995d60e91303d316b63ebd7869ac7d9567858a: Status 404 returned error can't find the container with id b411bfdbc4445453b98beff52c995d60e91303d316b63ebd7869ac7d9567858a
Jan 23 14:09:00 crc kubenswrapper[4775]: I0123 14:09:00.616010 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 23 14:09:00 crc kubenswrapper[4775]: I0123 14:09:00.974502 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 23 14:09:01 crc kubenswrapper[4775]: I0123 14:09:01.056564 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" event={"ID":"ff5caa98-bd54-485f-a11e-46a25c98f82f","Type":"ContainerStarted","Data":"43a6dcab62e1108a909f51abe63c62d16a838878cd7dadce64232f1868dbd569"}
Jan 23 14:09:01 crc kubenswrapper[4775]: I0123 14:09:01.056621 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" event={"ID":"ff5caa98-bd54-485f-a11e-46a25c98f82f","Type":"ContainerStarted","Data":"b411bfdbc4445453b98beff52c995d60e91303d316b63ebd7869ac7d9567858a"}
Jan 23 14:09:01 crc kubenswrapper[4775]: I0123 14:09:01.056648 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq"
Jan 23 14:09:01 crc kubenswrapper[4775]: I0123 14:09:01.231665 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 23 14:09:01 crc kubenswrapper[4775]: I0123 14:09:01.581914 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 23 14:09:01 crc kubenswrapper[4775]: I0123 14:09:01.712165 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 23 14:09:01 crc kubenswrapper[4775]: I0123 14:09:01.746375 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 23 14:09:01 crc kubenswrapper[4775]: I0123 14:09:01.873792 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 23 14:09:02 crc kubenswrapper[4775]: I0123 14:09:02.011445 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 23 14:09:02 crc kubenswrapper[4775]: I0123 14:09:02.033376 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 23 14:09:02 crc kubenswrapper[4775]: I0123 14:09:02.056162 4775 patch_prober.go:28] interesting pod/route-controller-manager-544cdfc94f-mdfkq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 23 14:09:02 crc kubenswrapper[4775]: I0123 14:09:02.056312 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" podUID="ff5caa98-bd54-485f-a11e-46a25c98f82f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:09:02 crc kubenswrapper[4775]: I0123 14:09:02.185960 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 23 14:09:02 crc kubenswrapper[4775]: I0123 14:09:02.256563 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 23 14:09:02 crc kubenswrapper[4775]: I0123 14:09:02.394465 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 23 14:09:02 crc kubenswrapper[4775]: I0123 14:09:02.481392 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 23 14:09:02 crc kubenswrapper[4775]: I0123 14:09:02.516686 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 23 14:09:02 crc kubenswrapper[4775]: I0123 14:09:02.747985 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 23 14:09:02 crc kubenswrapper[4775]: I0123 14:09:02.974360 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 23 14:09:03 crc kubenswrapper[4775]: I0123 14:09:03.061542 4775 patch_prober.go:28] interesting pod/route-controller-manager-544cdfc94f-mdfkq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 23 14:09:03 crc kubenswrapper[4775]: I0123 14:09:03.061643 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" podUID="ff5caa98-bd54-485f-a11e-46a25c98f82f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:09:03 crc kubenswrapper[4775]: I0123 14:09:03.210997 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 23 14:09:03 crc kubenswrapper[4775]: I0123 14:09:03.409895 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 23 14:09:03 crc kubenswrapper[4775]: I0123 14:09:03.562025 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 23 14:09:03 crc kubenswrapper[4775]: I0123 14:09:03.720122 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 23 14:09:03 crc kubenswrapper[4775]: I0123 14:09:03.870010 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 23 14:09:03 crc kubenswrapper[4775]: I0123 14:09:03.870186 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 23 14:09:03 crc kubenswrapper[4775]: I0123 14:09:03.990386 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.060605 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.088779 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.101723 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.152148 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.221109 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.230511 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.255722 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.297510 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.381231 4775 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.402764 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.412253 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.414184 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.439675 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.469886 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.544463 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.678519 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.694243 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.698073 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.709324 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.751595 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.779694 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.798125 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.808267 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.856369 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.924058 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.930726 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.971892 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.994988 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.031981 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.082998 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.110114 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.201314 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.218844 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.287947 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.302636 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.311653 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.345912 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.419538 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.420300 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.476849 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.570674 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.607308 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.796434 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.812312 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.813069 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.831982 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.940042 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss"
Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.940110 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss"
Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.940162 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss"
Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.941138 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss"
Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.941384 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss"
Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.942133 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss"
Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.992013 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.010626 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.074126 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.082321 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.101177 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.164165 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.197044 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss"
Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.201670 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.244500 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.262877 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.317449 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.345355 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.366958 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.369540 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.412263 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.416827 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.457084 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.495167 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.524048 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.554735 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.609989 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.766852 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.772246 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.831225 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.848956 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.870411 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.970235 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 23 14:09:07 crc kubenswrapper[4775]: I0123 14:09:07.004856 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 23 14:09:07 crc kubenswrapper[4775]: I0123 14:09:07.064889 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 23 14:09:07 crc kubenswrapper[4775]: I0123 14:09:07.090258 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" event={"ID":"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83","Type":"ContainerStarted","Data":"653cbfc156c37e7a5562d09f0da4132ef85fffcbbfa7b0bb4dcb957ff881ec84"}
Jan 23 14:09:07 crc kubenswrapper[4775]: I0123 14:09:07.090306 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" event={"ID":"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83","Type":"ContainerStarted","Data":"51613da25ddd6eb38c4ee47a22e6af21766feb4517f1baec6950dd90deefa0e9"}
Jan 23 14:09:07 crc kubenswrapper[4775]: I0123 14:09:07.090537 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss"
Jan 23 14:09:07 crc kubenswrapper[4775]: I0123 14:09:07.096599 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss"
Jan 23 14:09:07 crc kubenswrapper[4775]: I0123 14:09:07.184987 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 23 14:09:07 crc kubenswrapper[4775]: I0123 14:09:07.199642 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 23 14:09:07 crc kubenswrapper[4775]: I0123 14:09:07.204868 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 23 14:09:07 crc kubenswrapper[4775]: I0123 14:09:07.299550 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 23 14:09:07 crc kubenswrapper[4775]: I0123 14:09:07.563624 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 23 14:09:07 crc kubenswrapper[4775]: I0123 14:09:07.826886 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 23 14:09:07 crc kubenswrapper[4775]: I0123 14:09:07.850142 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 23 14:09:07 crc kubenswrapper[4775]: I0123 14:09:07.862144 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 23 14:09:08 crc kubenswrapper[4775]: I0123 14:09:08.054603 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 23 14:09:08 crc kubenswrapper[4775]: I0123 14:09:08.126989 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 23 14:09:08 crc kubenswrapper[4775]: I0123 14:09:08.189591 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 23 14:09:08 crc kubenswrapper[4775]: I0123 14:09:08.194906 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 23 14:09:08 crc kubenswrapper[4775]: I0123 14:09:08.198595 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 23 14:09:08 crc kubenswrapper[4775]: I0123 14:09:08.362278 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 23 14:09:08 crc kubenswrapper[4775]: I0123 14:09:08.532343 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 23 14:09:08 crc kubenswrapper[4775]: I0123 14:09:08.588284 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 23 14:09:08 crc kubenswrapper[4775]: I0123 14:09:08.636768 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 23 14:09:08 crc kubenswrapper[4775]: I0123 14:09:08.711123 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 23 14:09:08 crc kubenswrapper[4775]: I0123 14:09:08.781047 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 23 14:09:08 crc kubenswrapper[4775]: I0123 14:09:08.904172 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 23 14:09:08 crc kubenswrapper[4775]: I0123 14:09:08.921400 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 23 14:09:08 crc kubenswrapper[4775]: I0123 14:09:08.922788 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.030491 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.066960 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.127678 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.158797 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.283217 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.304587 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.340558 4775 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.405612 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.489914 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.571601 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.588348 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.619659 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.664027 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.722757 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.784338 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.814364 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.826797 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.828379 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.834900 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.926383 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.940837 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.947426 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.948388 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.969924 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 23 14:09:10 crc kubenswrapper[4775]: I0123 14:09:10.141485 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 23 14:09:10 crc kubenswrapper[4775]: I0123 14:09:10.235469 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 23 14:09:10 crc kubenswrapper[4775]: I0123 14:09:10.302029 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 23 14:09:10 crc kubenswrapper[4775]: I0123 14:09:10.318847 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 23 14:09:10 crc kubenswrapper[4775]: I0123 14:09:10.326391 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 23 14:09:10 crc kubenswrapper[4775]: I0123 14:09:10.435983 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 23 14:09:10 crc kubenswrapper[4775]: I0123 14:09:10.478496 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 23 14:09:10 crc kubenswrapper[4775]: I0123 14:09:10.607304 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 23 14:09:10 crc kubenswrapper[4775]: I0123 14:09:10.684623 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 23 14:09:10 crc kubenswrapper[4775]: I0123 14:09:10.701391 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 23 14:09:10 crc kubenswrapper[4775]: I0123 14:09:10.863112 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 23 14:09:10 crc kubenswrapper[4775]: I0123 14:09:10.885876 4775 patch_prober.go:28] interesting pod/route-controller-manager-544cdfc94f-mdfkq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 23 14:09:10 crc kubenswrapper[4775]: I0123 14:09:10.885936 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" podUID="ff5caa98-bd54-485f-a11e-46a25c98f82f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:09:10 crc kubenswrapper[4775]: I0123 14:09:10.914139 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 23 14:09:10 crc kubenswrapper[4775]: I0123 14:09:10.987449 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.011094 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.051228 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.240056 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.281917 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.370389 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.377961 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.383211 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.406111 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.436343 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.443343 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.644292 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.678873 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.748616 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.751851 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.752749 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.807785 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.836561 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.870061 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.941291 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.956475 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 23 14:09:12 crc kubenswrapper[4775]: I0123 14:09:12.170558 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 23 14:09:12 crc kubenswrapper[4775]: I0123 14:09:12.217236 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 23 14:09:12 crc kubenswrapper[4775]: I0123 14:09:12.368156 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 23 14:09:12 crc kubenswrapper[4775]: I0123 14:09:12.419022 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 23 14:09:12 crc kubenswrapper[4775]: I0123 14:09:12.445956 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 23 14:09:12 crc kubenswrapper[4775]: I0123 14:09:12.446925 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 23 14:09:12 crc kubenswrapper[4775]: I0123 14:09:12.452458 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 23 14:09:12 crc kubenswrapper[4775]: I0123 14:09:12.536107 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 23 14:09:12 crc kubenswrapper[4775]: I0123 14:09:12.743203 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 23 14:09:12 crc kubenswrapper[4775]: I0123 14:09:12.811349 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 23 14:09:12 crc kubenswrapper[4775]: I0123 14:09:12.921142 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.126530 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.206760 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.233019 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.236615 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.277247 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.367413 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.468191 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.479011 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.487394 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.520018 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.523026 4775 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.546708 4775 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.698775 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.729976 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.751928 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.859996 4775 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.873321 4775 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.876643 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" podStartSLOduration=47.876558575 podStartE2EDuration="47.876558575s" podCreationTimestamp="2026-01-23 14:08:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:09:01.078106553 +0000 UTC m=+288.072935303" watchObservedRunningTime="2026-01-23 14:09:13.876558575 +0000 UTC m=+300.871387345"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.877346 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" podStartSLOduration=47.877333849 podStartE2EDuration="47.877333849s" podCreationTimestamp="2026-01-23 14:08:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:09:07.108377679 +0000 UTC m=+294.103206479" watchObservedRunningTime="2026-01-23 14:09:13.877333849 +0000 UTC m=+300.872162619"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.881034 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4q8mj","openshift-kube-apiserver/kube-apiserver-crc"]
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.881108 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6f866778cb-dv6wd","openshift-kube-apiserver/kube-apiserver-crc"]
Jan 23 14:09:13 crc kubenswrapper[4775]: E0123 14:09:13.881368 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d34b3f-ebda-4e48-82ec-36db9214c42a" containerName="installer"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.881395 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d34b3f-ebda-4e48-82ec-36db9214c42a" containerName="installer"
Jan 23 14:09:13 crc kubenswrapper[4775]: E0123 14:09:13.881412 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3066d31d-92a4-45a7-b368-ba66d5689456" containerName="oauth-openshift"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.881426 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="3066d31d-92a4-45a7-b368-ba66d5689456" containerName="oauth-openshift"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.881569 4775 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0977f59d-f8ab-406f-adf0-f3ac44424242"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.881595 4775 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0977f59d-f8ab-406f-adf0-f3ac44424242"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.881599 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="3066d31d-92a4-45a7-b368-ba66d5689456" containerName="oauth-openshift"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.881618 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0d34b3f-ebda-4e48-82ec-36db9214c42a" containerName="installer"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.882085 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq","openshift-controller-manager/controller-manager-f759bc488-r96ss"]
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.882362 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.885545 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.887076 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.887222 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.887447 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.887082 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.888151 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.888787 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.891152 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.891981 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.892535 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.892630 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.892650 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.892764 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.894122 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.894543 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.905961 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.906467 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.908259 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.915250 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.952273 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=23.952242996 podStartE2EDuration="23.952242996s" podCreationTimestamp="2026-01-23 14:08:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:09:13.941396311 +0000 UTC m=+300.936225051" watchObservedRunningTime="2026-01-23 14:09:13.952242996 +0000 UTC m=+300.947071766"
Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.992292 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.033781 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.042352 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd"
Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.042404 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-user-template-login\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd"
Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.042424 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd"
Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.042444 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9989b1f-b602-41d4-b2be-9db936737e34-audit-dir\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd"
Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.042461 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd"
Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.042481 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-user-template-error\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd"
Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.042498 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shjkq\" (UniqueName: \"kubernetes.io/projected/e9989b1f-b602-41d4-b2be-9db936737e34-kube-api-access-shjkq\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd"
Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.042705 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd"
Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.042792 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-session\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd"
Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.042836 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd"
Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.042886 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.042908 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.042927 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e9989b1f-b602-41d4-b2be-9db936737e34-audit-policies\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.042955 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.144273 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.144372 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.144431 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-user-template-login\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.144474 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " 
pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.144514 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9989b1f-b602-41d4-b2be-9db936737e34-audit-dir\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.144608 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.144652 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-user-template-error\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.144694 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shjkq\" (UniqueName: \"kubernetes.io/projected/e9989b1f-b602-41d4-b2be-9db936737e34-kube-api-access-shjkq\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.144730 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.144834 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-session\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.144869 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.144919 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " 
pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.145101 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.145143 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e9989b1f-b602-41d4-b2be-9db936737e34-audit-policies\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.145512 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.145689 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.144684 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9989b1f-b602-41d4-b2be-9db936737e34-audit-dir\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.146267 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.147690 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e9989b1f-b602-41d4-b2be-9db936737e34-audit-policies\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.150996 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.151104 4775 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-session\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.151423 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.151428 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.152068 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.152497 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.153426 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-user-template-login\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.161010 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-user-template-error\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.172793 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shjkq\" (UniqueName: \"kubernetes.io/projected/e9989b1f-b602-41d4-b2be-9db936737e34-kube-api-access-shjkq\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.207965 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.513176 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.626277 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.654648 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6f866778cb-dv6wd"] Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.971158 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 23 14:09:15 crc kubenswrapper[4775]: I0123 14:09:15.113174 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 23 14:09:15 crc kubenswrapper[4775]: I0123 14:09:15.140748 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" event={"ID":"e9989b1f-b602-41d4-b2be-9db936737e34","Type":"ContainerStarted","Data":"48f29e602a852e0ea0d277991522d8fa604cb0c43d918086a467b47c29d09db7"} Jan 23 14:09:15 crc kubenswrapper[4775]: I0123 14:09:15.364924 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 23 14:09:15 crc kubenswrapper[4775]: I0123 14:09:15.375360 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 23 14:09:15 crc kubenswrapper[4775]: I0123 14:09:15.431837 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 23 14:09:15 crc kubenswrapper[4775]: I0123 14:09:15.472725 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 23 14:09:15 crc kubenswrapper[4775]: I0123 14:09:15.724401 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3066d31d-92a4-45a7-b368-ba66d5689456" path="/var/lib/kubelet/pods/3066d31d-92a4-45a7-b368-ba66d5689456/volumes" Jan 23 14:09:15 crc kubenswrapper[4775]: I0123 14:09:15.776584 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 23 14:09:15 crc kubenswrapper[4775]: I0123 14:09:15.911079 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 23 14:09:16 crc kubenswrapper[4775]: I0123 14:09:16.047164 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 23 14:09:16 crc kubenswrapper[4775]: I0123 14:09:16.091898 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 23 14:09:16 crc kubenswrapper[4775]: I0123 14:09:16.121767 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 23 14:09:16 crc kubenswrapper[4775]: I0123 14:09:16.149792 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" 
event={"ID":"e9989b1f-b602-41d4-b2be-9db936737e34","Type":"ContainerStarted","Data":"8d30f5526d5ea1b5e6adb26cb88f4bfb1e261e90a48b788817ed1f9806d76525"} Jan 23 14:09:16 crc kubenswrapper[4775]: I0123 14:09:16.150176 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:16 crc kubenswrapper[4775]: I0123 14:09:16.157533 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:16 crc kubenswrapper[4775]: I0123 14:09:16.186115 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" podStartSLOduration=68.186084178 podStartE2EDuration="1m8.186084178s" podCreationTimestamp="2026-01-23 14:08:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:09:16.181412494 +0000 UTC m=+303.176241274" watchObservedRunningTime="2026-01-23 14:09:16.186084178 +0000 UTC m=+303.180912928" Jan 23 14:09:16 crc kubenswrapper[4775]: I0123 14:09:16.407997 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 23 14:09:16 crc kubenswrapper[4775]: I0123 14:09:16.460563 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 23 14:09:16 crc kubenswrapper[4775]: I0123 14:09:16.483965 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 23 14:09:16 crc kubenswrapper[4775]: I0123 14:09:16.545036 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 23 14:09:16 crc kubenswrapper[4775]: I0123 14:09:16.561437 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 23 14:09:16 crc kubenswrapper[4775]: I0123 14:09:16.595604 4775 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 23 14:09:16 crc kubenswrapper[4775]: I0123 14:09:16.623637 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 23 14:09:17 crc kubenswrapper[4775]: I0123 14:09:17.054707 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 23 14:09:24 crc kubenswrapper[4775]: I0123 14:09:24.960965 4775 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 14:09:24 crc kubenswrapper[4775]: I0123 14:09:24.962550 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://aff268ac61a1e94757e586a2d154e2ae45702e5030a24a5cd4532578fe0a281b" gracePeriod=5 Jan 23 14:09:26 crc kubenswrapper[4775]: I0123 14:09:26.577997 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f759bc488-r96ss"] Jan 23 14:09:26 crc kubenswrapper[4775]: I0123 14:09:26.578875 4775 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" containerName="controller-manager" containerID="cri-o://653cbfc156c37e7a5562d09f0da4132ef85fffcbbfa7b0bb4dcb957ff881ec84" gracePeriod=30 Jan 23 14:09:26 crc kubenswrapper[4775]: I0123 14:09:26.668978 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq"] Jan 23 14:09:26 crc kubenswrapper[4775]: I0123 14:09:26.669240 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" podUID="ff5caa98-bd54-485f-a11e-46a25c98f82f" containerName="route-controller-manager" containerID="cri-o://43a6dcab62e1108a909f51abe63c62d16a838878cd7dadce64232f1868dbd569" gracePeriod=30 Jan 23 14:09:26 crc kubenswrapper[4775]: I0123 14:09:26.945650 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.010475 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.019413 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff5caa98-bd54-485f-a11e-46a25c98f82f-client-ca\") pod \"ff5caa98-bd54-485f-a11e-46a25c98f82f\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.019480 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr2rr\" (UniqueName: \"kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr\") pod \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.019510 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wfb4\" (UniqueName: \"kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4\") pod \"ff5caa98-bd54-485f-a11e-46a25c98f82f\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.019534 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff5caa98-bd54-485f-a11e-46a25c98f82f-serving-cert\") pod \"ff5caa98-bd54-485f-a11e-46a25c98f82f\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.019552 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5caa98-bd54-485f-a11e-46a25c98f82f-config\") pod \"ff5caa98-bd54-485f-a11e-46a25c98f82f\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.019568 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config\") pod \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.019590 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles\") pod \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.019606 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca\") pod \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.019625 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert\") pod \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.020980 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config" (OuterVolumeSpecName: "config") pod "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.021789 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff5caa98-bd54-485f-a11e-46a25c98f82f-client-ca" (OuterVolumeSpecName: "client-ca") pod "ff5caa98-bd54-485f-a11e-46a25c98f82f" (UID: "ff5caa98-bd54-485f-a11e-46a25c98f82f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.021937 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.023086 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca" (OuterVolumeSpecName: "client-ca") pod "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.023964 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff5caa98-bd54-485f-a11e-46a25c98f82f-config" (OuterVolumeSpecName: "config") pod "ff5caa98-bd54-485f-a11e-46a25c98f82f" (UID: "ff5caa98-bd54-485f-a11e-46a25c98f82f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.025776 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff5caa98-bd54-485f-a11e-46a25c98f82f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ff5caa98-bd54-485f-a11e-46a25c98f82f" (UID: "ff5caa98-bd54-485f-a11e-46a25c98f82f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.025867 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4" (OuterVolumeSpecName: "kube-api-access-2wfb4") pod "ff5caa98-bd54-485f-a11e-46a25c98f82f" (UID: "ff5caa98-bd54-485f-a11e-46a25c98f82f"). InnerVolumeSpecName "kube-api-access-2wfb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.026040 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr" (OuterVolumeSpecName: "kube-api-access-vr2rr") pod "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83"). InnerVolumeSpecName "kube-api-access-vr2rr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.026971 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.120721 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr2rr\" (UniqueName: \"kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.120749 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wfb4\" (UniqueName: \"kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.120758 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff5caa98-bd54-485f-a11e-46a25c98f82f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.120768 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5caa98-bd54-485f-a11e-46a25c98f82f-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.120776 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.120784 4775 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.120792 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.120816 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:27 crc 
kubenswrapper[4775]: I0123 14:09:27.120824 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff5caa98-bd54-485f-a11e-46a25c98f82f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.231394 4775 generic.go:334] "Generic (PLEG): container finished" podID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" containerID="653cbfc156c37e7a5562d09f0da4132ef85fffcbbfa7b0bb4dcb957ff881ec84" exitCode=0 Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.231508 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" event={"ID":"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83","Type":"ContainerDied","Data":"653cbfc156c37e7a5562d09f0da4132ef85fffcbbfa7b0bb4dcb957ff881ec84"} Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.231558 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" event={"ID":"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83","Type":"ContainerDied","Data":"51613da25ddd6eb38c4ee47a22e6af21766feb4517f1baec6950dd90deefa0e9"} Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.231597 4775 scope.go:117] "RemoveContainer" containerID="653cbfc156c37e7a5562d09f0da4132ef85fffcbbfa7b0bb4dcb957ff881ec84" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.231854 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.236243 4775 generic.go:334] "Generic (PLEG): container finished" podID="ff5caa98-bd54-485f-a11e-46a25c98f82f" containerID="43a6dcab62e1108a909f51abe63c62d16a838878cd7dadce64232f1868dbd569" exitCode=0 Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.236270 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" event={"ID":"ff5caa98-bd54-485f-a11e-46a25c98f82f","Type":"ContainerDied","Data":"43a6dcab62e1108a909f51abe63c62d16a838878cd7dadce64232f1868dbd569"} Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.236285 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" event={"ID":"ff5caa98-bd54-485f-a11e-46a25c98f82f","Type":"ContainerDied","Data":"b411bfdbc4445453b98beff52c995d60e91303d316b63ebd7869ac7d9567858a"} Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.236316 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.264988 4775 scope.go:117] "RemoveContainer" containerID="653cbfc156c37e7a5562d09f0da4132ef85fffcbbfa7b0bb4dcb957ff881ec84" Jan 23 14:09:27 crc kubenswrapper[4775]: E0123 14:09:27.266065 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"653cbfc156c37e7a5562d09f0da4132ef85fffcbbfa7b0bb4dcb957ff881ec84\": container with ID starting with 653cbfc156c37e7a5562d09f0da4132ef85fffcbbfa7b0bb4dcb957ff881ec84 not found: ID does not exist" containerID="653cbfc156c37e7a5562d09f0da4132ef85fffcbbfa7b0bb4dcb957ff881ec84" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.266094 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"653cbfc156c37e7a5562d09f0da4132ef85fffcbbfa7b0bb4dcb957ff881ec84"} err="failed to get container status \"653cbfc156c37e7a5562d09f0da4132ef85fffcbbfa7b0bb4dcb957ff881ec84\": rpc error: code = NotFound desc = could not find container \"653cbfc156c37e7a5562d09f0da4132ef85fffcbbfa7b0bb4dcb957ff881ec84\": container with ID starting with 653cbfc156c37e7a5562d09f0da4132ef85fffcbbfa7b0bb4dcb957ff881ec84 not found: ID does not exist" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.266112 4775 scope.go:117] "RemoveContainer" containerID="43a6dcab62e1108a909f51abe63c62d16a838878cd7dadce64232f1868dbd569" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.270433 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq"] Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.274784 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq"] Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.284885 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f759bc488-r96ss"] Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.290566 4775 scope.go:117] "RemoveContainer" containerID="43a6dcab62e1108a909f51abe63c62d16a838878cd7dadce64232f1868dbd569" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.292099 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-f759bc488-r96ss"] Jan 23 14:09:27 crc kubenswrapper[4775]: E0123 14:09:27.292208 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43a6dcab62e1108a909f51abe63c62d16a838878cd7dadce64232f1868dbd569\": container with ID starting with 43a6dcab62e1108a909f51abe63c62d16a838878cd7dadce64232f1868dbd569 not found: ID does not exist" containerID="43a6dcab62e1108a909f51abe63c62d16a838878cd7dadce64232f1868dbd569" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.292271 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43a6dcab62e1108a909f51abe63c62d16a838878cd7dadce64232f1868dbd569"} err="failed to get container status \"43a6dcab62e1108a909f51abe63c62d16a838878cd7dadce64232f1868dbd569\": rpc error: code = NotFound desc = could not find container \"43a6dcab62e1108a909f51abe63c62d16a838878cd7dadce64232f1868dbd569\": container with ID starting with 43a6dcab62e1108a909f51abe63c62d16a838878cd7dadce64232f1868dbd569 not found: ID does not exist" Jan 23 
14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.719590 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" path="/var/lib/kubelet/pods/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83/volumes" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.720384 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff5caa98-bd54-485f-a11e-46a25c98f82f" path="/var/lib/kubelet/pods/ff5caa98-bd54-485f-a11e-46a25c98f82f/volumes" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.053051 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-785c4bb865-5xxrk"] Jan 23 14:09:28 crc kubenswrapper[4775]: E0123 14:09:28.053394 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff5caa98-bd54-485f-a11e-46a25c98f82f" containerName="route-controller-manager" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.053419 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff5caa98-bd54-485f-a11e-46a25c98f82f" containerName="route-controller-manager" Jan 23 14:09:28 crc kubenswrapper[4775]: E0123 14:09:28.053432 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.053442 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 23 14:09:28 crc kubenswrapper[4775]: E0123 14:09:28.053489 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" containerName="controller-manager" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.053499 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" containerName="controller-manager" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.053649 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff5caa98-bd54-485f-a11e-46a25c98f82f" containerName="route-controller-manager" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.053663 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" containerName="controller-manager" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.053696 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.054048 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.058029 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.058413 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.058586 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.058601 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.058621 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.058985 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.065161 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.078750 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd"] Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.083099 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.086018 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.086788 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.088059 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.088208 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.090276 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.090695 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.091175 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-785c4bb865-5xxrk"] Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.108515 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd"] Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.235385 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3d30dd02-24bf-444b-bf37-a01716591d49-serving-cert\") pod \"route-controller-manager-5b89f6874d-69gnd\" (UID: \"3d30dd02-24bf-444b-bf37-a01716591d49\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.235458 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d30dd02-24bf-444b-bf37-a01716591d49-client-ca\") pod \"route-controller-manager-5b89f6874d-69gnd\" (UID: \"3d30dd02-24bf-444b-bf37-a01716591d49\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.235495 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-serving-cert\") pod \"controller-manager-785c4bb865-5xxrk\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.235542 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d30dd02-24bf-444b-bf37-a01716591d49-config\") pod \"route-controller-manager-5b89f6874d-69gnd\" (UID: \"3d30dd02-24bf-444b-bf37-a01716591d49\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.235567 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-config\") pod \"controller-manager-785c4bb865-5xxrk\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.235600 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tckq9\" (UniqueName: \"kubernetes.io/projected/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-kube-api-access-tckq9\") pod \"controller-manager-785c4bb865-5xxrk\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.235626 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnvjw\" (UniqueName: \"kubernetes.io/projected/3d30dd02-24bf-444b-bf37-a01716591d49-kube-api-access-jnvjw\") pod \"route-controller-manager-5b89f6874d-69gnd\" (UID: \"3d30dd02-24bf-444b-bf37-a01716591d49\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.235654 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-proxy-ca-bundles\") pod \"controller-manager-785c4bb865-5xxrk\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.235679 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-client-ca\") pod \"controller-manager-785c4bb865-5xxrk\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.302696 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.338180 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnvjw\" (UniqueName: \"kubernetes.io/projected/3d30dd02-24bf-444b-bf37-a01716591d49-kube-api-access-jnvjw\") pod \"route-controller-manager-5b89f6874d-69gnd\" (UID: \"3d30dd02-24bf-444b-bf37-a01716591d49\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.338886 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-proxy-ca-bundles\") pod \"controller-manager-785c4bb865-5xxrk\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.339130 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-client-ca\") pod \"controller-manager-785c4bb865-5xxrk\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.339389 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d30dd02-24bf-444b-bf37-a01716591d49-serving-cert\") pod \"route-controller-manager-5b89f6874d-69gnd\" (UID: \"3d30dd02-24bf-444b-bf37-a01716591d49\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.339830 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-client-ca\") pod \"controller-manager-785c4bb865-5xxrk\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.339968 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-proxy-ca-bundles\") pod \"controller-manager-785c4bb865-5xxrk\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.340758 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d30dd02-24bf-444b-bf37-a01716591d49-client-ca\") pod \"route-controller-manager-5b89f6874d-69gnd\" (UID: \"3d30dd02-24bf-444b-bf37-a01716591d49\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.340819 4775 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-serving-cert\") pod \"controller-manager-785c4bb865-5xxrk\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.340938 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d30dd02-24bf-444b-bf37-a01716591d49-config\") pod \"route-controller-manager-5b89f6874d-69gnd\" (UID: \"3d30dd02-24bf-444b-bf37-a01716591d49\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.340975 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-config\") pod \"controller-manager-785c4bb865-5xxrk\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.341030 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tckq9\" (UniqueName: \"kubernetes.io/projected/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-kube-api-access-tckq9\") pod \"controller-manager-785c4bb865-5xxrk\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.341678 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d30dd02-24bf-444b-bf37-a01716591d49-client-ca\") pod \"route-controller-manager-5b89f6874d-69gnd\" (UID: \"3d30dd02-24bf-444b-bf37-a01716591d49\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.342323 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d30dd02-24bf-444b-bf37-a01716591d49-config\") pod \"route-controller-manager-5b89f6874d-69gnd\" (UID: \"3d30dd02-24bf-444b-bf37-a01716591d49\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.342775 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-config\") pod \"controller-manager-785c4bb865-5xxrk\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.345045 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d30dd02-24bf-444b-bf37-a01716591d49-serving-cert\") pod \"route-controller-manager-5b89f6874d-69gnd\" (UID: \"3d30dd02-24bf-444b-bf37-a01716591d49\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.350949 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-serving-cert\") pod \"controller-manager-785c4bb865-5xxrk\" (UID: 
\"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.357143 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnvjw\" (UniqueName: \"kubernetes.io/projected/3d30dd02-24bf-444b-bf37-a01716591d49-kube-api-access-jnvjw\") pod \"route-controller-manager-5b89f6874d-69gnd\" (UID: \"3d30dd02-24bf-444b-bf37-a01716591d49\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.365320 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tckq9\" (UniqueName: \"kubernetes.io/projected/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-kube-api-access-tckq9\") pod \"controller-manager-785c4bb865-5xxrk\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.381422 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.412727 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.594737 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-785c4bb865-5xxrk"] Jan 23 14:09:28 crc kubenswrapper[4775]: W0123 14:09:28.607932 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bd0778a_d1f2_417f_acc4_e6cb92c96f45.slice/crio-9e9466cc4afe62937f0a277fd984621693d8d498e195874a416bc6b22d04e74c WatchSource:0}: Error finding container 9e9466cc4afe62937f0a277fd984621693d8d498e195874a416bc6b22d04e74c: Status 404 returned error can't find the container with id 9e9466cc4afe62937f0a277fd984621693d8d498e195874a416bc6b22d04e74c Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.657166 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd"] Jan 23 14:09:29 crc kubenswrapper[4775]: I0123 14:09:29.250641 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" event={"ID":"3d30dd02-24bf-444b-bf37-a01716591d49","Type":"ContainerStarted","Data":"07b5183f82d839051054341b96c5c2531a81bc3b09e02b8f914291fa8161b35b"} Jan 23 14:09:29 crc kubenswrapper[4775]: I0123 14:09:29.251082 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" event={"ID":"3d30dd02-24bf-444b-bf37-a01716591d49","Type":"ContainerStarted","Data":"b2254ed186345bce63999097f48f317e358a300d29e3faa324494ca3b29d1a75"} Jan 23 14:09:29 crc kubenswrapper[4775]: I0123 14:09:29.251125 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:29 crc kubenswrapper[4775]: I0123 14:09:29.252483 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" 
event={"ID":"2bd0778a-d1f2-417f-acc4-e6cb92c96f45","Type":"ContainerStarted","Data":"17b6896c4af87db6f9c0269c156ed1d9d3db521d11c0cb4f7f748ba38f2f8bb6"} Jan 23 14:09:29 crc kubenswrapper[4775]: I0123 14:09:29.252524 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" event={"ID":"2bd0778a-d1f2-417f-acc4-e6cb92c96f45","Type":"ContainerStarted","Data":"9e9466cc4afe62937f0a277fd984621693d8d498e195874a416bc6b22d04e74c"} Jan 23 14:09:29 crc kubenswrapper[4775]: I0123 14:09:29.252820 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:29 crc kubenswrapper[4775]: I0123 14:09:29.257342 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:29 crc kubenswrapper[4775]: I0123 14:09:29.277996 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" podStartSLOduration=3.277788993 podStartE2EDuration="3.277788993s" podCreationTimestamp="2026-01-23 14:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:09:29.272995045 +0000 UTC m=+316.267823785" watchObservedRunningTime="2026-01-23 14:09:29.277788993 +0000 UTC m=+316.272617753" Jan 23 14:09:29 crc kubenswrapper[4775]: I0123 14:09:29.303935 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" podStartSLOduration=3.303919671 podStartE2EDuration="3.303919671s" podCreationTimestamp="2026-01-23 14:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:09:29.301915219 +0000 UTC m=+316.296743969" watchObservedRunningTime="2026-01-23 14:09:29.303919671 +0000 UTC m=+316.298748431" Jan 23 14:09:29 crc kubenswrapper[4775]: I0123 14:09:29.337498 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.119780 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.119886 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.261420 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.261519 4775 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="aff268ac61a1e94757e586a2d154e2ae45702e5030a24a5cd4532578fe0a281b" exitCode=137 Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.261641 4775 util.go:48] "No ready sandbox for pod can be found. 
Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.261773 4775 scope.go:117] "RemoveContainer" containerID="aff268ac61a1e94757e586a2d154e2ae45702e5030a24a5cd4532578fe0a281b"
Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.262281 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.262388 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.262792 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.262889 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.262930 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.262951 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.263080 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.263117 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.263138 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.263267 4775 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.263445 4775 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.263486 4775 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.263511 4775 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.276595 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.341211 4775 scope.go:117] "RemoveContainer" containerID="aff268ac61a1e94757e586a2d154e2ae45702e5030a24a5cd4532578fe0a281b"
Jan 23 14:09:30 crc kubenswrapper[4775]: E0123 14:09:30.341732 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aff268ac61a1e94757e586a2d154e2ae45702e5030a24a5cd4532578fe0a281b\": container with ID starting with aff268ac61a1e94757e586a2d154e2ae45702e5030a24a5cd4532578fe0a281b not found: ID does not exist" containerID="aff268ac61a1e94757e586a2d154e2ae45702e5030a24a5cd4532578fe0a281b"
Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.341793 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aff268ac61a1e94757e586a2d154e2ae45702e5030a24a5cd4532578fe0a281b"} err="failed to get container status \"aff268ac61a1e94757e586a2d154e2ae45702e5030a24a5cd4532578fe0a281b\": rpc error: code = NotFound desc = could not find container \"aff268ac61a1e94757e586a2d154e2ae45702e5030a24a5cd4532578fe0a281b\": container with ID starting with aff268ac61a1e94757e586a2d154e2ae45702e5030a24a5cd4532578fe0a281b not found: ID does not exist"
Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.364576 4775 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 23 14:09:31 crc kubenswrapper[4775]: I0123 14:09:31.720138 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 23 14:09:35 crc kubenswrapper[4775]: I0123 14:09:35.213123 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 23 14:09:46 crc kubenswrapper[4775]: I0123 14:09:46.562075 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-785c4bb865-5xxrk"]
source="api" pods=["openshift-controller-manager/controller-manager-785c4bb865-5xxrk"] Jan 23 14:09:46 crc kubenswrapper[4775]: I0123 14:09:46.562582 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" podUID="2bd0778a-d1f2-417f-acc4-e6cb92c96f45" containerName="controller-manager" containerID="cri-o://17b6896c4af87db6f9c0269c156ed1d9d3db521d11c0cb4f7f748ba38f2f8bb6" gracePeriod=30 Jan 23 14:09:46 crc kubenswrapper[4775]: I0123 14:09:46.569735 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd"] Jan 23 14:09:46 crc kubenswrapper[4775]: I0123 14:09:46.569948 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" podUID="3d30dd02-24bf-444b-bf37-a01716591d49" containerName="route-controller-manager" containerID="cri-o://07b5183f82d839051054341b96c5c2531a81bc3b09e02b8f914291fa8161b35b" gracePeriod=30 Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.393285 4775 generic.go:334] "Generic (PLEG): container finished" podID="2bd0778a-d1f2-417f-acc4-e6cb92c96f45" containerID="17b6896c4af87db6f9c0269c156ed1d9d3db521d11c0cb4f7f748ba38f2f8bb6" exitCode=0 Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.393656 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" event={"ID":"2bd0778a-d1f2-417f-acc4-e6cb92c96f45","Type":"ContainerDied","Data":"17b6896c4af87db6f9c0269c156ed1d9d3db521d11c0cb4f7f748ba38f2f8bb6"} Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.399091 4775 generic.go:334] "Generic (PLEG): container finished" podID="3d30dd02-24bf-444b-bf37-a01716591d49" containerID="07b5183f82d839051054341b96c5c2531a81bc3b09e02b8f914291fa8161b35b" exitCode=0 Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.399392 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" event={"ID":"3d30dd02-24bf-444b-bf37-a01716591d49","Type":"ContainerDied","Data":"07b5183f82d839051054341b96c5c2531a81bc3b09e02b8f914291fa8161b35b"} Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.603097 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.607450 4775 util.go:48] "No ready sandbox for pod can be found. 
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.627357 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq"]
Jan 23 14:09:47 crc kubenswrapper[4775]: E0123 14:09:47.627557 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd0778a-d1f2-417f-acc4-e6cb92c96f45" containerName="controller-manager"
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.627568 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd0778a-d1f2-417f-acc4-e6cb92c96f45" containerName="controller-manager"
Jan 23 14:09:47 crc kubenswrapper[4775]: E0123 14:09:47.627582 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d30dd02-24bf-444b-bf37-a01716591d49" containerName="route-controller-manager"
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.627588 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d30dd02-24bf-444b-bf37-a01716591d49" containerName="route-controller-manager"
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.627671 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d30dd02-24bf-444b-bf37-a01716591d49" containerName="route-controller-manager"
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.627685 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bd0778a-d1f2-417f-acc4-e6cb92c96f45" containerName="controller-manager"
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.628050 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq"
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.636311 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq"]
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.801059 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d30dd02-24bf-444b-bf37-a01716591d49-serving-cert\") pod \"3d30dd02-24bf-444b-bf37-a01716591d49\" (UID: \"3d30dd02-24bf-444b-bf37-a01716591d49\") "
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.801131 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d30dd02-24bf-444b-bf37-a01716591d49-config\") pod \"3d30dd02-24bf-444b-bf37-a01716591d49\" (UID: \"3d30dd02-24bf-444b-bf37-a01716591d49\") "
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.801156 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-proxy-ca-bundles\") pod \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") "
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.801190 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnvjw\" (UniqueName: \"kubernetes.io/projected/3d30dd02-24bf-444b-bf37-a01716591d49-kube-api-access-jnvjw\") pod \"3d30dd02-24bf-444b-bf37-a01716591d49\" (UID: \"3d30dd02-24bf-444b-bf37-a01716591d49\") "
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.801219 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d30dd02-24bf-444b-bf37-a01716591d49-client-ca\") pod \"3d30dd02-24bf-444b-bf37-a01716591d49\" (UID: \"3d30dd02-24bf-444b-bf37-a01716591d49\") "
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.801242 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-serving-cert\") pod \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") "
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.801275 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-client-ca\") pod \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") "
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.801314 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-config\") pod \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") "
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.801365 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tckq9\" (UniqueName: \"kubernetes.io/projected/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-kube-api-access-tckq9\") pod \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") "
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.801503 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7ab4aa6-c476-4952-a259-e1e63a42bb69-config\") pod \"route-controller-manager-76946b564d-nl7wq\" (UID: \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq"
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.801531 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7ab4aa6-c476-4952-a259-e1e63a42bb69-serving-cert\") pod \"route-controller-manager-76946b564d-nl7wq\" (UID: \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq"
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.801554 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7ab4aa6-c476-4952-a259-e1e63a42bb69-client-ca\") pod \"route-controller-manager-76946b564d-nl7wq\" (UID: \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq"
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.801598 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dch9g\" (UniqueName: \"kubernetes.io/projected/d7ab4aa6-c476-4952-a259-e1e63a42bb69-kube-api-access-dch9g\") pod \"route-controller-manager-76946b564d-nl7wq\" (UID: \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq"
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.802365 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d30dd02-24bf-444b-bf37-a01716591d49-client-ca" (OuterVolumeSpecName: "client-ca") pod "3d30dd02-24bf-444b-bf37-a01716591d49" (UID: "3d30dd02-24bf-444b-bf37-a01716591d49"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.802387 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-client-ca" (OuterVolumeSpecName: "client-ca") pod "2bd0778a-d1f2-417f-acc4-e6cb92c96f45" (UID: "2bd0778a-d1f2-417f-acc4-e6cb92c96f45"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.802469 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-config" (OuterVolumeSpecName: "config") pod "2bd0778a-d1f2-417f-acc4-e6cb92c96f45" (UID: "2bd0778a-d1f2-417f-acc4-e6cb92c96f45"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.802507 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2bd0778a-d1f2-417f-acc4-e6cb92c96f45" (UID: "2bd0778a-d1f2-417f-acc4-e6cb92c96f45"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.802567 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d30dd02-24bf-444b-bf37-a01716591d49-config" (OuterVolumeSpecName: "config") pod "3d30dd02-24bf-444b-bf37-a01716591d49" (UID: "3d30dd02-24bf-444b-bf37-a01716591d49"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.807634 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2bd0778a-d1f2-417f-acc4-e6cb92c96f45" (UID: "2bd0778a-d1f2-417f-acc4-e6cb92c96f45"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.808211 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d30dd02-24bf-444b-bf37-a01716591d49-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3d30dd02-24bf-444b-bf37-a01716591d49" (UID: "3d30dd02-24bf-444b-bf37-a01716591d49"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.816951 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-kube-api-access-tckq9" (OuterVolumeSpecName: "kube-api-access-tckq9") pod "2bd0778a-d1f2-417f-acc4-e6cb92c96f45" (UID: "2bd0778a-d1f2-417f-acc4-e6cb92c96f45"). InnerVolumeSpecName "kube-api-access-tckq9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.822023 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d30dd02-24bf-444b-bf37-a01716591d49-kube-api-access-jnvjw" (OuterVolumeSpecName: "kube-api-access-jnvjw") pod "3d30dd02-24bf-444b-bf37-a01716591d49" (UID: "3d30dd02-24bf-444b-bf37-a01716591d49"). InnerVolumeSpecName "kube-api-access-jnvjw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.902777 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7ab4aa6-c476-4952-a259-e1e63a42bb69-config\") pod \"route-controller-manager-76946b564d-nl7wq\" (UID: \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq"
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.903139 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7ab4aa6-c476-4952-a259-e1e63a42bb69-serving-cert\") pod \"route-controller-manager-76946b564d-nl7wq\" (UID: \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq"
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.903165 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7ab4aa6-c476-4952-a259-e1e63a42bb69-client-ca\") pod \"route-controller-manager-76946b564d-nl7wq\" (UID: \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq"
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.903222 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dch9g\" (UniqueName: \"kubernetes.io/projected/d7ab4aa6-c476-4952-a259-e1e63a42bb69-kube-api-access-dch9g\") pod \"route-controller-manager-76946b564d-nl7wq\" (UID: \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq"
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.903280 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d30dd02-24bf-444b-bf37-a01716591d49-config\") on node \"crc\" DevicePath \"\""
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.903292 4775 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.903302 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnvjw\" (UniqueName: \"kubernetes.io/projected/3d30dd02-24bf-444b-bf37-a01716591d49-kube-api-access-jnvjw\") on node \"crc\" DevicePath \"\""
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.903314 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d30dd02-24bf-444b-bf37-a01716591d49-client-ca\") on node \"crc\" DevicePath \"\""
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.903325 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.903336 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-client-ca\") on node \"crc\" DevicePath \"\""
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.903348 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-config\") on node \"crc\" DevicePath \"\""
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.903360 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tckq9\" (UniqueName: \"kubernetes.io/projected/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-kube-api-access-tckq9\") on node \"crc\" DevicePath \"\""
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.903368 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d30dd02-24bf-444b-bf37-a01716591d49-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.904121 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7ab4aa6-c476-4952-a259-e1e63a42bb69-config\") pod \"route-controller-manager-76946b564d-nl7wq\" (UID: \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq"
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.904332 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7ab4aa6-c476-4952-a259-e1e63a42bb69-client-ca\") pod \"route-controller-manager-76946b564d-nl7wq\" (UID: \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq"
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.908722 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7ab4aa6-c476-4952-a259-e1e63a42bb69-serving-cert\") pod \"route-controller-manager-76946b564d-nl7wq\" (UID: \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq"
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.931640 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dch9g\" (UniqueName: \"kubernetes.io/projected/d7ab4aa6-c476-4952-a259-e1e63a42bb69-kube-api-access-dch9g\") pod \"route-controller-manager-76946b564d-nl7wq\" (UID: \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq"
Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.941339 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq"
Jan 23 14:09:48 crc kubenswrapper[4775]: I0123 14:09:48.160942 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq"]
Jan 23 14:09:48 crc kubenswrapper[4775]: W0123 14:09:48.165660 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7ab4aa6_c476_4952_a259_e1e63a42bb69.slice/crio-0d59494029faa0dc8c83935b2a8d96eb1666ed423d428c52740a79423310818f WatchSource:0}: Error finding container 0d59494029faa0dc8c83935b2a8d96eb1666ed423d428c52740a79423310818f: Status 404 returned error can't find the container with id 0d59494029faa0dc8c83935b2a8d96eb1666ed423d428c52740a79423310818f
Jan 23 14:09:48 crc kubenswrapper[4775]: I0123 14:09:48.405896 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq" event={"ID":"d7ab4aa6-c476-4952-a259-e1e63a42bb69","Type":"ContainerStarted","Data":"781a04fc229c3442a54b74394d8d8073527ad1460a3c3be51f6f7244137482ea"}
Jan 23 14:09:48 crc kubenswrapper[4775]: I0123 14:09:48.406242 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq"
Jan 23 14:09:48 crc kubenswrapper[4775]: I0123 14:09:48.406254 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq" event={"ID":"d7ab4aa6-c476-4952-a259-e1e63a42bb69","Type":"ContainerStarted","Data":"0d59494029faa0dc8c83935b2a8d96eb1666ed423d428c52740a79423310818f"}
Jan 23 14:09:48 crc kubenswrapper[4775]: I0123 14:09:48.407483 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk"
Jan 23 14:09:48 crc kubenswrapper[4775]: I0123 14:09:48.407499 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" event={"ID":"2bd0778a-d1f2-417f-acc4-e6cb92c96f45","Type":"ContainerDied","Data":"9e9466cc4afe62937f0a277fd984621693d8d498e195874a416bc6b22d04e74c"}
Jan 23 14:09:48 crc kubenswrapper[4775]: I0123 14:09:48.407582 4775 scope.go:117] "RemoveContainer" containerID="17b6896c4af87db6f9c0269c156ed1d9d3db521d11c0cb4f7f748ba38f2f8bb6"
Jan 23 14:09:48 crc kubenswrapper[4775]: I0123 14:09:48.409256 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" event={"ID":"3d30dd02-24bf-444b-bf37-a01716591d49","Type":"ContainerDied","Data":"b2254ed186345bce63999097f48f317e358a300d29e3faa324494ca3b29d1a75"}
Jan 23 14:09:48 crc kubenswrapper[4775]: I0123 14:09:48.409310 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd"
Jan 23 14:09:48 crc kubenswrapper[4775]: I0123 14:09:48.424928 4775 scope.go:117] "RemoveContainer" containerID="07b5183f82d839051054341b96c5c2531a81bc3b09e02b8f914291fa8161b35b"
Jan 23 14:09:48 crc kubenswrapper[4775]: I0123 14:09:48.431090 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq" podStartSLOduration=2.43107599 podStartE2EDuration="2.43107599s" podCreationTimestamp="2026-01-23 14:09:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:09:48.43043315 +0000 UTC m=+335.425261910" watchObservedRunningTime="2026-01-23 14:09:48.43107599 +0000 UTC m=+335.425904730"
Jan 23 14:09:48 crc kubenswrapper[4775]: I0123 14:09:48.440895 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd"]
Jan 23 14:09:48 crc kubenswrapper[4775]: I0123 14:09:48.445617 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd"]
Jan 23 14:09:48 crc kubenswrapper[4775]: I0123 14:09:48.455364 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-785c4bb865-5xxrk"]
Jan 23 14:09:48 crc kubenswrapper[4775]: I0123 14:09:48.458540 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-785c4bb865-5xxrk"]
Jan 23 14:09:48 crc kubenswrapper[4775]: I0123 14:09:48.544916 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq"
Jan 23 14:09:49 crc kubenswrapper[4775]: I0123 14:09:49.724163 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bd0778a-d1f2-417f-acc4-e6cb92c96f45" path="/var/lib/kubelet/pods/2bd0778a-d1f2-417f-acc4-e6cb92c96f45/volumes"
Jan 23 14:09:49 crc kubenswrapper[4775]: I0123 14:09:49.729329 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d30dd02-24bf-444b-bf37-a01716591d49" path="/var/lib/kubelet/pods/3d30dd02-24bf-444b-bf37-a01716591d49/volumes"
Jan 23 14:09:49 crc kubenswrapper[4775]: I0123 14:09:49.914550 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.079497 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-2hjdt"]
Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.080868 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt"
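The SyncLoop ADD/UPDATE/DELETE/REMOVE entries are the kubelet's view of pod churn arriving from the API server; the same churn is visible to any client with a watch on the namespace. A minimal client-go sketch of that (namespace taken from the log, everything else illustrative):

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// ADDED/MODIFIED/DELETED events here correspond to the kubelet's
	// SyncLoop ADD/UPDATE/DELETE lines for the same pods.
	w, err := client.CoreV1().Pods("openshift-controller-manager").Watch(
		context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		fmt.Printf("%s %s phase=%s\n", ev.Type, pod.Name, pod.Status.Phase)
	}
}
```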
Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.084641 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.085088 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.085924 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.086394 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.086664 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.089348 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.099038 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.101372 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-2hjdt"]
Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.231967 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-config\") pod \"controller-manager-d6f97d578-2hjdt\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt"
Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.232044 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-proxy-ca-bundles\") pod \"controller-manager-d6f97d578-2hjdt\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt"
Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.232088 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-client-ca\") pod \"controller-manager-d6f97d578-2hjdt\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt"
Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.232136 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2b9e347-4937-4835-b496-178073507714-serving-cert\") pod \"controller-manager-d6f97d578-2hjdt\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt"
Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.232173 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2gwg\" (UniqueName: \"kubernetes.io/projected/f2b9e347-4937-4835-b496-178073507714-kube-api-access-b2gwg\") pod \"controller-manager-d6f97d578-2hjdt\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt"
Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.333933 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-config\") pod \"controller-manager-d6f97d578-2hjdt\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt"
Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.334036 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-proxy-ca-bundles\") pod \"controller-manager-d6f97d578-2hjdt\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt"
Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.334072 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-client-ca\") pod \"controller-manager-d6f97d578-2hjdt\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt"
Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.334124 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2b9e347-4937-4835-b496-178073507714-serving-cert\") pod \"controller-manager-d6f97d578-2hjdt\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt"
Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.334161 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2gwg\" (UniqueName: \"kubernetes.io/projected/f2b9e347-4937-4835-b496-178073507714-kube-api-access-b2gwg\") pod \"controller-manager-d6f97d578-2hjdt\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt"
Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.336536 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-client-ca\") pod \"controller-manager-d6f97d578-2hjdt\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt"
Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.336686 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-proxy-ca-bundles\") pod \"controller-manager-d6f97d578-2hjdt\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt"
Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.338330 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-config\") pod \"controller-manager-d6f97d578-2hjdt\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt"
Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.350768 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2b9e347-4937-4835-b496-178073507714-serving-cert\") pod \"controller-manager-d6f97d578-2hjdt\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt"
Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.368150 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2gwg\" (UniqueName: \"kubernetes.io/projected/f2b9e347-4937-4835-b496-178073507714-kube-api-access-b2gwg\") pod \"controller-manager-d6f97d578-2hjdt\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt"
Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.409325 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt"
Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.694161 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-2hjdt"]
Jan 23 14:09:51 crc kubenswrapper[4775]: I0123 14:09:51.427882 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" event={"ID":"f2b9e347-4937-4835-b496-178073507714","Type":"ContainerStarted","Data":"0acc0ad8ec8e8be9769a28309cee2b6e18eb66e5c98ef58afd161b49ec1c7bb0"}
Jan 23 14:09:51 crc kubenswrapper[4775]: I0123 14:09:51.428194 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" event={"ID":"f2b9e347-4937-4835-b496-178073507714","Type":"ContainerStarted","Data":"7d9751b50e071ccb4609de4a7a32972dacdb52e1f06dc5123bad488447e2ce18"}
Jan 23 14:09:51 crc kubenswrapper[4775]: I0123 14:09:51.429456 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt"
Jan 23 14:09:51 crc kubenswrapper[4775]: I0123 14:09:51.435143 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt"
Jan 23 14:09:51 crc kubenswrapper[4775]: I0123 14:09:51.456123 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" podStartSLOduration=5.456096826 podStartE2EDuration="5.456096826s" podCreationTimestamp="2026-01-23 14:09:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:09:51.451986815 +0000 UTC m=+338.446815585" watchObservedRunningTime="2026-01-23 14:09:51.456096826 +0000 UTC m=+338.450925576"
Jan 23 14:09:53 crc kubenswrapper[4775]: I0123 14:09:53.219730 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:09:53 crc kubenswrapper[4775]: I0123 14:09:53.219881 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
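The liveness failure above is an HTTP GET against http://127.0.0.1:8798/health answered with "connection refused", meaning nothing was listening on that port at probe time. For reference, the smallest process that would satisfy a probe of that shape is just an HTTP 200 on /health; the real machine-config-daemon handler is assumed to be far more involved:

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	// Port and path taken from the probe output in the log.
	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK) // any 2xx/3xx counts as a probe success
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:8798", nil))
}
```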
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.410624 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7ld89"] Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.412156 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.432733 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7ld89"] Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.567832 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.568260 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9856\" (UniqueName: \"kubernetes.io/projected/7f5d763e-6546-4013-9a83-b3c24c48d8bb-kube-api-access-c9856\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.568533 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f5d763e-6546-4013-9a83-b3c24c48d8bb-trusted-ca\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.568780 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7f5d763e-6546-4013-9a83-b3c24c48d8bb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.569039 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7f5d763e-6546-4013-9a83-b3c24c48d8bb-registry-certificates\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.569377 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7f5d763e-6546-4013-9a83-b3c24c48d8bb-registry-tls\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.569861 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/7f5d763e-6546-4013-9a83-b3c24c48d8bb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.570230 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7f5d763e-6546-4013-9a83-b3c24c48d8bb-bound-sa-token\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.602226 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.677162 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7f5d763e-6546-4013-9a83-b3c24c48d8bb-bound-sa-token\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.677440 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9856\" (UniqueName: \"kubernetes.io/projected/7f5d763e-6546-4013-9a83-b3c24c48d8bb-kube-api-access-c9856\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.677602 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f5d763e-6546-4013-9a83-b3c24c48d8bb-trusted-ca\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.677697 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7f5d763e-6546-4013-9a83-b3c24c48d8bb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.677772 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7f5d763e-6546-4013-9a83-b3c24c48d8bb-registry-certificates\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.677895 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7f5d763e-6546-4013-9a83-b3c24c48d8bb-registry-tls\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.677991 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7f5d763e-6546-4013-9a83-b3c24c48d8bb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.680280 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f5d763e-6546-4013-9a83-b3c24c48d8bb-trusted-ca\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.680874 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7f5d763e-6546-4013-9a83-b3c24c48d8bb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.682470 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7f5d763e-6546-4013-9a83-b3c24c48d8bb-registry-certificates\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.688778 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7f5d763e-6546-4013-9a83-b3c24c48d8bb-registry-tls\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.694754 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7f5d763e-6546-4013-9a83-b3c24c48d8bb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.709967 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7f5d763e-6546-4013-9a83-b3c24c48d8bb-bound-sa-token\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.713085 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9856\" (UniqueName: \"kubernetes.io/projected/7f5d763e-6546-4013-9a83-b3c24c48d8bb-kube-api-access-c9856\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.735851 4775 util.go:30] "No sandbox for pod can be found. 
Jan 23 14:10:19 crc kubenswrapper[4775]: I0123 14:10:19.196625 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7ld89"]
Jan 23 14:10:19 crc kubenswrapper[4775]: W0123 14:10:19.203702 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f5d763e_6546_4013_9a83_b3c24c48d8bb.slice/crio-493cf16521d0c8904fd5e41b29dee53ccd1c2e8093db7361429d9e6c6c0b3a1d WatchSource:0}: Error finding container 493cf16521d0c8904fd5e41b29dee53ccd1c2e8093db7361429d9e6c6c0b3a1d: Status 404 returned error can't find the container with id 493cf16521d0c8904fd5e41b29dee53ccd1c2e8093db7361429d9e6c6c0b3a1d
Jan 23 14:10:19 crc kubenswrapper[4775]: I0123 14:10:19.602834 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" event={"ID":"7f5d763e-6546-4013-9a83-b3c24c48d8bb","Type":"ContainerStarted","Data":"adbaf0b34ee6fc3e6a5ca459e4fefdddc60f614f4c1583b24f4330225ec9c59d"}
Jan 23 14:10:19 crc kubenswrapper[4775]: I0123 14:10:19.602877 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" event={"ID":"7f5d763e-6546-4013-9a83-b3c24c48d8bb","Type":"ContainerStarted","Data":"493cf16521d0c8904fd5e41b29dee53ccd1c2e8093db7361429d9e6c6c0b3a1d"}
Jan 23 14:10:19 crc kubenswrapper[4775]: I0123 14:10:19.602998 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-7ld89"
Jan 23 14:10:23 crc kubenswrapper[4775]: I0123 14:10:23.219307 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:10:23 crc kubenswrapper[4775]: I0123 14:10:23.219915 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:10:26 crc kubenswrapper[4775]: I0123 14:10:26.564837 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" podStartSLOduration=8.56479648 podStartE2EDuration="8.56479648s" podCreationTimestamp="2026-01-23 14:10:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:10:19.622948123 +0000 UTC m=+366.617776873" watchObservedRunningTime="2026-01-23 14:10:26.56479648 +0000 UTC m=+373.559625230"
Jan 23 14:10:26 crc kubenswrapper[4775]: I0123 14:10:26.565586 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-2hjdt"]
Jan 23 14:10:26 crc kubenswrapper[4775]: I0123 14:10:26.565781 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" podUID="f2b9e347-4937-4835-b496-178073507714" containerName="controller-manager" containerID="cri-o://0acc0ad8ec8e8be9769a28309cee2b6e18eb66e5c98ef58afd161b49ec1c7bb0" gracePeriod=30
containerID="cri-o://0acc0ad8ec8e8be9769a28309cee2b6e18eb66e5c98ef58afd161b49ec1c7bb0" gracePeriod=30 Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.031205 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.174094 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-proxy-ca-bundles\") pod \"f2b9e347-4937-4835-b496-178073507714\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.174175 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2gwg\" (UniqueName: \"kubernetes.io/projected/f2b9e347-4937-4835-b496-178073507714-kube-api-access-b2gwg\") pod \"f2b9e347-4937-4835-b496-178073507714\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.174987 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-config\") pod \"f2b9e347-4937-4835-b496-178073507714\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.175027 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-client-ca\") pod \"f2b9e347-4937-4835-b496-178073507714\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.175055 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2b9e347-4937-4835-b496-178073507714-serving-cert\") pod \"f2b9e347-4937-4835-b496-178073507714\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.175613 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-client-ca" (OuterVolumeSpecName: "client-ca") pod "f2b9e347-4937-4835-b496-178073507714" (UID: "f2b9e347-4937-4835-b496-178073507714"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.175549 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f2b9e347-4937-4835-b496-178073507714" (UID: "f2b9e347-4937-4835-b496-178073507714"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.176026 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-config" (OuterVolumeSpecName: "config") pod "f2b9e347-4937-4835-b496-178073507714" (UID: "f2b9e347-4937-4835-b496-178073507714"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.181039 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2b9e347-4937-4835-b496-178073507714-kube-api-access-b2gwg" (OuterVolumeSpecName: "kube-api-access-b2gwg") pod "f2b9e347-4937-4835-b496-178073507714" (UID: "f2b9e347-4937-4835-b496-178073507714"). InnerVolumeSpecName "kube-api-access-b2gwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.185481 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b9e347-4937-4835-b496-178073507714-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f2b9e347-4937-4835-b496-178073507714" (UID: "f2b9e347-4937-4835-b496-178073507714"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.275820 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.275866 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.275878 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2b9e347-4937-4835-b496-178073507714-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.275890 4775 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.275904 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2gwg\" (UniqueName: \"kubernetes.io/projected/f2b9e347-4937-4835-b496-178073507714-kube-api-access-b2gwg\") on node \"crc\" DevicePath \"\"" Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.659436 4775 generic.go:334] "Generic (PLEG): container finished" podID="f2b9e347-4937-4835-b496-178073507714" containerID="0acc0ad8ec8e8be9769a28309cee2b6e18eb66e5c98ef58afd161b49ec1c7bb0" exitCode=0 Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.659473 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" event={"ID":"f2b9e347-4937-4835-b496-178073507714","Type":"ContainerDied","Data":"0acc0ad8ec8e8be9769a28309cee2b6e18eb66e5c98ef58afd161b49ec1c7bb0"} Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.659503 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" event={"ID":"f2b9e347-4937-4835-b496-178073507714","Type":"ContainerDied","Data":"7d9751b50e071ccb4609de4a7a32972dacdb52e1f06dc5123bad488447e2ce18"} Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.659522 4775 scope.go:117] "RemoveContainer" containerID="0acc0ad8ec8e8be9769a28309cee2b6e18eb66e5c98ef58afd161b49ec1c7bb0" Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.659526 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.687883 4775 scope.go:117] "RemoveContainer" containerID="0acc0ad8ec8e8be9769a28309cee2b6e18eb66e5c98ef58afd161b49ec1c7bb0" Jan 23 14:10:27 crc kubenswrapper[4775]: E0123 14:10:27.688597 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0acc0ad8ec8e8be9769a28309cee2b6e18eb66e5c98ef58afd161b49ec1c7bb0\": container with ID starting with 0acc0ad8ec8e8be9769a28309cee2b6e18eb66e5c98ef58afd161b49ec1c7bb0 not found: ID does not exist" containerID="0acc0ad8ec8e8be9769a28309cee2b6e18eb66e5c98ef58afd161b49ec1c7bb0" Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.688634 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0acc0ad8ec8e8be9769a28309cee2b6e18eb66e5c98ef58afd161b49ec1c7bb0"} err="failed to get container status \"0acc0ad8ec8e8be9769a28309cee2b6e18eb66e5c98ef58afd161b49ec1c7bb0\": rpc error: code = NotFound desc = could not find container \"0acc0ad8ec8e8be9769a28309cee2b6e18eb66e5c98ef58afd161b49ec1c7bb0\": container with ID starting with 0acc0ad8ec8e8be9769a28309cee2b6e18eb66e5c98ef58afd161b49ec1c7bb0 not found: ID does not exist" Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.707315 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-2hjdt"] Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.712911 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-2hjdt"] Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.726546 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2b9e347-4937-4835-b496-178073507714" path="/var/lib/kubelet/pods/f2b9e347-4937-4835-b496-178073507714/volumes" Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.111630 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-785c4bb865-6kdh2"] Jan 23 14:10:28 crc kubenswrapper[4775]: E0123 14:10:28.112132 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b9e347-4937-4835-b496-178073507714" containerName="controller-manager" Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.112160 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b9e347-4937-4835-b496-178073507714" containerName="controller-manager" Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.112314 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2b9e347-4937-4835-b496-178073507714" containerName="controller-manager" Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.113005 4775 util.go:30] "No sandbox for pod can be found. 
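The "ContainerStatus from runtime service failed" / "DeleteContainer returned error" pair above is a benign race: by the time the kubelet re-queried CRI-O, the container it was deleting was already gone, so the runtime answered gRPC NotFound and the kubelet moved on. A minimal Go sketch of tolerating that race; the helper isCRINotFound is illustrative, not kubelet source:

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// isCRINotFound reports whether a CRI call failed only because the
// object is already gone, which a deleter can treat as success.
func isCRINotFound(err error) bool {
	s, ok := status.FromError(err)
	return ok && s.Code() == codes.NotFound
}

func main() {
	// Stand-in for the NotFound error logged above.
	err := status.Error(codes.NotFound, "could not find container")
	if isCRINotFound(err) {
		fmt.Println("container already removed; treat deletion as complete")
	}
}
```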
Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.115433 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.117790 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.118333 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.118587 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.120907 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.121065 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.127098 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-785c4bb865-6kdh2"]
Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.131169 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.290673 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0875fb84-cf98-476b-9330-e28814be3bfe-config\") pod \"controller-manager-785c4bb865-6kdh2\" (UID: \"0875fb84-cf98-476b-9330-e28814be3bfe\") " pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2"
Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.290753 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0875fb84-cf98-476b-9330-e28814be3bfe-serving-cert\") pod \"controller-manager-785c4bb865-6kdh2\" (UID: \"0875fb84-cf98-476b-9330-e28814be3bfe\") " pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2"
Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.290859 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xqrc\" (UniqueName: \"kubernetes.io/projected/0875fb84-cf98-476b-9330-e28814be3bfe-kube-api-access-2xqrc\") pod \"controller-manager-785c4bb865-6kdh2\" (UID: \"0875fb84-cf98-476b-9330-e28814be3bfe\") " pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2"
Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.290890 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0875fb84-cf98-476b-9330-e28814be3bfe-client-ca\") pod \"controller-manager-785c4bb865-6kdh2\" (UID: \"0875fb84-cf98-476b-9330-e28814be3bfe\") " pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2"
Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.291006 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0875fb84-cf98-476b-9330-e28814be3bfe-proxy-ca-bundles\") pod \"controller-manager-785c4bb865-6kdh2\" (UID: \"0875fb84-cf98-476b-9330-e28814be3bfe\") " pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2"
Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.392320 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0875fb84-cf98-476b-9330-e28814be3bfe-proxy-ca-bundles\") pod \"controller-manager-785c4bb865-6kdh2\" (UID: \"0875fb84-cf98-476b-9330-e28814be3bfe\") " pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2"
Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.392382 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0875fb84-cf98-476b-9330-e28814be3bfe-config\") pod \"controller-manager-785c4bb865-6kdh2\" (UID: \"0875fb84-cf98-476b-9330-e28814be3bfe\") " pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2"
Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.392424 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0875fb84-cf98-476b-9330-e28814be3bfe-serving-cert\") pod \"controller-manager-785c4bb865-6kdh2\" (UID: \"0875fb84-cf98-476b-9330-e28814be3bfe\") " pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2"
Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.392470 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xqrc\" (UniqueName: \"kubernetes.io/projected/0875fb84-cf98-476b-9330-e28814be3bfe-kube-api-access-2xqrc\") pod \"controller-manager-785c4bb865-6kdh2\" (UID: \"0875fb84-cf98-476b-9330-e28814be3bfe\") " pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2"
Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.392498 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0875fb84-cf98-476b-9330-e28814be3bfe-client-ca\") pod \"controller-manager-785c4bb865-6kdh2\" (UID: \"0875fb84-cf98-476b-9330-e28814be3bfe\") " pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2"
Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.393617 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0875fb84-cf98-476b-9330-e28814be3bfe-client-ca\") pod \"controller-manager-785c4bb865-6kdh2\" (UID: \"0875fb84-cf98-476b-9330-e28814be3bfe\") " pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2"
Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.393866 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0875fb84-cf98-476b-9330-e28814be3bfe-config\") pod \"controller-manager-785c4bb865-6kdh2\" (UID: \"0875fb84-cf98-476b-9330-e28814be3bfe\") " pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2"
Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.393968 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0875fb84-cf98-476b-9330-e28814be3bfe-proxy-ca-bundles\") pod \"controller-manager-785c4bb865-6kdh2\" (UID: \"0875fb84-cf98-476b-9330-e28814be3bfe\") " pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2"
Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.403916 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0875fb84-cf98-476b-9330-e28814be3bfe-serving-cert\") pod \"controller-manager-785c4bb865-6kdh2\" (UID: \"0875fb84-cf98-476b-9330-e28814be3bfe\") " pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2"
Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.419083 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xqrc\" (UniqueName: \"kubernetes.io/projected/0875fb84-cf98-476b-9330-e28814be3bfe-kube-api-access-2xqrc\") pod \"controller-manager-785c4bb865-6kdh2\" (UID: \"0875fb84-cf98-476b-9330-e28814be3bfe\") " pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2"
Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.441151 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2"
Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.844049 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-785c4bb865-6kdh2"]
Jan 23 14:10:28 crc kubenswrapper[4775]: W0123 14:10:28.852620 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0875fb84_cf98_476b_9330_e28814be3bfe.slice/crio-776ec008f6473445186534a00afdb22793be50e8f1c3b81db5ce335c10a3619a WatchSource:0}: Error finding container 776ec008f6473445186534a00afdb22793be50e8f1c3b81db5ce335c10a3619a: Status 404 returned error can't find the container with id 776ec008f6473445186534a00afdb22793be50e8f1c3b81db5ce335c10a3619a
Jan 23 14:10:29 crc kubenswrapper[4775]: I0123 14:10:29.673693 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2" event={"ID":"0875fb84-cf98-476b-9330-e28814be3bfe","Type":"ContainerStarted","Data":"6e88197c7c21036b5ee13c2b295b846ddf6410d5ce528b60103ac45da754af27"}
Jan 23 14:10:29 crc kubenswrapper[4775]: I0123 14:10:29.673774 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2" event={"ID":"0875fb84-cf98-476b-9330-e28814be3bfe","Type":"ContainerStarted","Data":"776ec008f6473445186534a00afdb22793be50e8f1c3b81db5ce335c10a3619a"}
Jan 23 14:10:29 crc kubenswrapper[4775]: I0123 14:10:29.674119 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2"
Jan 23 14:10:29 crc kubenswrapper[4775]: I0123 14:10:29.681572 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2"
Jan 23 14:10:29 crc kubenswrapper[4775]: I0123 14:10:29.696324 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2" podStartSLOduration=3.6962912279999998 podStartE2EDuration="3.696291228s" podCreationTimestamp="2026-01-23 14:10:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:10:29.693507719 +0000 UTC m=+376.688336449" watchObservedRunningTime="2026-01-23 14:10:29.696291228 +0000 UTC m=+376.691120008"
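The startup-latency entry above is internally consistent: watchObservedRunningTime (14:10:29.696291228) minus podCreationTimestamp (14:10:26) gives exactly the reported podStartE2EDuration of 3.696291228s. A quick check in Go, with the timestamps copied from the entry and the layout assumed to be Go's default time.String format (the trailing "m=+..." monotonic reading stripped before parsing):

```go
package main

import (
	"fmt"
	"time"
)

// Layout matching the kubelet's wall-clock timestamps above.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func main() {
	created, err := time.Parse(layout, "2026-01-23 14:10:26 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2026-01-23 14:10:29.696291228 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 3.696291228s, matching podStartE2EDuration in the log.
	fmt.Println(observed.Sub(created))
}
```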
Jan 23 14:10:38 crc kubenswrapper[4775]: I0123 14:10:38.741991 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-7ld89"
Jan 23 14:10:38 crc kubenswrapper[4775]: I0123 14:10:38.801124 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xpwjl"]
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.345419 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-285dn"]
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.347457 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-285dn" podUID="1b219edd-2ebd-4968-b427-ec555eade68c" containerName="registry-server" containerID="cri-o://5d5b3239c4354bbf8668793adb57fca35d10a6d969fbc9bd29c2463925617ab2" gracePeriod=30
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.356824 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2q2jj"]
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.357688 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2q2jj" podUID="8bb5169a-229e-4d38-beea-4783c11d0098" containerName="registry-server" containerID="cri-o://c7260cd3d625fa792d5d94bcaae087826a69b9166dd1b6258fd35d2e1bd77b66" gracePeriod=30
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.370565 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pmcq8"]
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.371017 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" podUID="8ac48e42-bde7-4701-b994-825906603b06" containerName="marketplace-operator" containerID="cri-o://f51d1a8b2d530002962d11af10b4a9dc9403d48b6849c26ac64175b119f21f51" gracePeriod=30
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.384024 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q6l68"]
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.384412 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q6l68" podUID="e59d5724-424f-4151-98a4-c2cfa3918ac0" containerName="registry-server" containerID="cri-o://706b207c906b11477ffafcc96a740d5e3fd0c32011317bda62a73b4005aa1b8f" gracePeriod=30
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.396067 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-84gx7"]
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.396327 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-24s7d"]
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.397076 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-24s7d"
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.402758 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-24s7d"]
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.440332 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-84gx7" podUID="0e3253a9-fac0-401c-8e02-52758dbc40f3" containerName="registry-server" containerID="cri-o://d42ef899e57f6183a5f1a3a8ba0663646429d61c6d74c35df738852826152a1c" gracePeriod=30
Jan 23 14:10:50 crc kubenswrapper[4775]: E0123 14:10:50.470268 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 706b207c906b11477ffafcc96a740d5e3fd0c32011317bda62a73b4005aa1b8f is running failed: container process not found" containerID="706b207c906b11477ffafcc96a740d5e3fd0c32011317bda62a73b4005aa1b8f" cmd=["grpc_health_probe","-addr=:50051"]
Jan 23 14:10:50 crc kubenswrapper[4775]: E0123 14:10:50.471098 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 706b207c906b11477ffafcc96a740d5e3fd0c32011317bda62a73b4005aa1b8f is running failed: container process not found" containerID="706b207c906b11477ffafcc96a740d5e3fd0c32011317bda62a73b4005aa1b8f" cmd=["grpc_health_probe","-addr=:50051"]
Jan 23 14:10:50 crc kubenswrapper[4775]: E0123 14:10:50.471523 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 706b207c906b11477ffafcc96a740d5e3fd0c32011317bda62a73b4005aa1b8f is running failed: container process not found" containerID="706b207c906b11477ffafcc96a740d5e3fd0c32011317bda62a73b4005aa1b8f" cmd=["grpc_health_probe","-addr=:50051"]
Jan 23 14:10:50 crc kubenswrapper[4775]: E0123 14:10:50.471554 4775 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 706b207c906b11477ffafcc96a740d5e3fd0c32011317bda62a73b4005aa1b8f is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-q6l68" podUID="e59d5724-424f-4151-98a4-c2cfa3918ac0" containerName="registry-server"
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.499781 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ffa6638c-aaa0-418b-ad22-e5532ae16f68-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-24s7d\" (UID: \"ffa6638c-aaa0-418b-ad22-e5532ae16f68\") " pod="openshift-marketplace/marketplace-operator-79b997595-24s7d"
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.499862 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ffa6638c-aaa0-418b-ad22-e5532ae16f68-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-24s7d\" (UID: \"ffa6638c-aaa0-418b-ad22-e5532ae16f68\") " pod="openshift-marketplace/marketplace-operator-79b997595-24s7d"
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.499896 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h92f\" (UniqueName: \"kubernetes.io/projected/ffa6638c-aaa0-418b-ad22-e5532ae16f68-kube-api-access-4h92f\") pod \"marketplace-operator-79b997595-24s7d\" (UID: \"ffa6638c-aaa0-418b-ad22-e5532ae16f68\") " pod="openshift-marketplace/marketplace-operator-79b997595-24s7d"
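The ExecSync failures above come from the registry pods' readiness command, grpc_health_probe -addr=:50051, racing container shutdown: the probe exec lands after the process is already gone. A rough Go equivalent of what that probe does (the address and one-second timeout are assumptions; off-host you would need the pod IP):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// Plaintext dial, as grpc_health_probe uses by default.
	conn, err := grpc.DialContext(ctx, "localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		fmt.Println("probe failed:", err) // e.g. the container is already gone
		return
	}
	fmt.Println("status:", resp.GetStatus()) // SERVING when ready
}
```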
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.601482 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ffa6638c-aaa0-418b-ad22-e5532ae16f68-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-24s7d\" (UID: \"ffa6638c-aaa0-418b-ad22-e5532ae16f68\") " pod="openshift-marketplace/marketplace-operator-79b997595-24s7d"
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.601526 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ffa6638c-aaa0-418b-ad22-e5532ae16f68-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-24s7d\" (UID: \"ffa6638c-aaa0-418b-ad22-e5532ae16f68\") " pod="openshift-marketplace/marketplace-operator-79b997595-24s7d"
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.601554 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h92f\" (UniqueName: \"kubernetes.io/projected/ffa6638c-aaa0-418b-ad22-e5532ae16f68-kube-api-access-4h92f\") pod \"marketplace-operator-79b997595-24s7d\" (UID: \"ffa6638c-aaa0-418b-ad22-e5532ae16f68\") " pod="openshift-marketplace/marketplace-operator-79b997595-24s7d"
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.602999 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ffa6638c-aaa0-418b-ad22-e5532ae16f68-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-24s7d\" (UID: \"ffa6638c-aaa0-418b-ad22-e5532ae16f68\") " pod="openshift-marketplace/marketplace-operator-79b997595-24s7d"
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.609635 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ffa6638c-aaa0-418b-ad22-e5532ae16f68-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-24s7d\" (UID: \"ffa6638c-aaa0-418b-ad22-e5532ae16f68\") " pod="openshift-marketplace/marketplace-operator-79b997595-24s7d"
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.623790 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h92f\" (UniqueName: \"kubernetes.io/projected/ffa6638c-aaa0-418b-ad22-e5532ae16f68-kube-api-access-4h92f\") pod \"marketplace-operator-79b997595-24s7d\" (UID: \"ffa6638c-aaa0-418b-ad22-e5532ae16f68\") " pod="openshift-marketplace/marketplace-operator-79b997595-24s7d"
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.798313 4775 generic.go:334] "Generic (PLEG): container finished" podID="e59d5724-424f-4151-98a4-c2cfa3918ac0" containerID="706b207c906b11477ffafcc96a740d5e3fd0c32011317bda62a73b4005aa1b8f" exitCode=0
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.798390 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q6l68" event={"ID":"e59d5724-424f-4151-98a4-c2cfa3918ac0","Type":"ContainerDied","Data":"706b207c906b11477ffafcc96a740d5e3fd0c32011317bda62a73b4005aa1b8f"}
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.800336 4775 generic.go:334] "Generic (PLEG): container finished" podID="1b219edd-2ebd-4968-b427-ec555eade68c" containerID="5d5b3239c4354bbf8668793adb57fca35d10a6d969fbc9bd29c2463925617ab2" exitCode=0
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.800395 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-285dn" event={"ID":"1b219edd-2ebd-4968-b427-ec555eade68c","Type":"ContainerDied","Data":"5d5b3239c4354bbf8668793adb57fca35d10a6d969fbc9bd29c2463925617ab2"}
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.802241 4775 generic.go:334] "Generic (PLEG): container finished" podID="8bb5169a-229e-4d38-beea-4783c11d0098" containerID="c7260cd3d625fa792d5d94bcaae087826a69b9166dd1b6258fd35d2e1bd77b66" exitCode=0
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.802289 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q2jj" event={"ID":"8bb5169a-229e-4d38-beea-4783c11d0098","Type":"ContainerDied","Data":"c7260cd3d625fa792d5d94bcaae087826a69b9166dd1b6258fd35d2e1bd77b66"}
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.803607 4775 generic.go:334] "Generic (PLEG): container finished" podID="8ac48e42-bde7-4701-b994-825906603b06" containerID="f51d1a8b2d530002962d11af10b4a9dc9403d48b6849c26ac64175b119f21f51" exitCode=0
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.803670 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" event={"ID":"8ac48e42-bde7-4701-b994-825906603b06","Type":"ContainerDied","Data":"f51d1a8b2d530002962d11af10b4a9dc9403d48b6849c26ac64175b119f21f51"}
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.805716 4775 generic.go:334] "Generic (PLEG): container finished" podID="0e3253a9-fac0-401c-8e02-52758dbc40f3" containerID="d42ef899e57f6183a5f1a3a8ba0663646429d61c6d74c35df738852826152a1c" exitCode=0
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.805756 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84gx7" event={"ID":"0e3253a9-fac0-401c-8e02-52758dbc40f3","Type":"ContainerDied","Data":"d42ef899e57f6183a5f1a3a8ba0663646429d61c6d74c35df738852826152a1c"}
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.871726 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-24s7d"
Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.874326 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-285dn"
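The "Generic (PLEG): container finished" entries above follow a stable key=value shape, which makes them easy to mine when correlating shutdowns across pods. A small sketch extracting podID, containerID, and exit code; the regex is mine, written against the entries above, not a kubelet-provided format guarantee:

```go
package main

import (
	"fmt"
	"regexp"
)

// Matches the structured tail of a PLEG "container finished" entry.
var finished = regexp.MustCompile(
	`container finished" podID="([^"]+)" containerID="([^"]+)" exitCode=(\d+)`)

func main() {
	line := `I0123 14:10:50.798313 4775 generic.go:334] "Generic (PLEG): container finished" podID="e59d5724-424f-4151-98a4-c2cfa3918ac0" containerID="706b207c906b11477ffafcc96a740d5e3fd0c32011317bda62a73b4005aa1b8f" exitCode=0`
	if m := finished.FindStringSubmatch(line); m != nil {
		fmt.Printf("pod=%s container=%s exit=%s\n", m[1], m[2], m[3])
	}
}
```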
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.014309 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8"
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.016652 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnxtf\" (UniqueName: \"kubernetes.io/projected/1b219edd-2ebd-4968-b427-ec555eade68c-kube-api-access-vnxtf\") pod \"1b219edd-2ebd-4968-b427-ec555eade68c\" (UID: \"1b219edd-2ebd-4968-b427-ec555eade68c\") "
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.016692 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b219edd-2ebd-4968-b427-ec555eade68c-utilities\") pod \"1b219edd-2ebd-4968-b427-ec555eade68c\" (UID: \"1b219edd-2ebd-4968-b427-ec555eade68c\") "
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.016730 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b219edd-2ebd-4968-b427-ec555eade68c-catalog-content\") pod \"1b219edd-2ebd-4968-b427-ec555eade68c\" (UID: \"1b219edd-2ebd-4968-b427-ec555eade68c\") "
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.017637 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b219edd-2ebd-4968-b427-ec555eade68c-utilities" (OuterVolumeSpecName: "utilities") pod "1b219edd-2ebd-4968-b427-ec555eade68c" (UID: "1b219edd-2ebd-4968-b427-ec555eade68c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.024710 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2q2jj"
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.027243 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b219edd-2ebd-4968-b427-ec555eade68c-kube-api-access-vnxtf" (OuterVolumeSpecName: "kube-api-access-vnxtf") pod "1b219edd-2ebd-4968-b427-ec555eade68c" (UID: "1b219edd-2ebd-4968-b427-ec555eade68c"). InnerVolumeSpecName "kube-api-access-vnxtf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.069476 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q6l68"
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.077474 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-84gx7"
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.083747 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b219edd-2ebd-4968-b427-ec555eade68c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b219edd-2ebd-4968-b427-ec555eade68c" (UID: "1b219edd-2ebd-4968-b427-ec555eade68c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.118376 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ac48e42-bde7-4701-b994-825906603b06-marketplace-operator-metrics\") pod \"8ac48e42-bde7-4701-b994-825906603b06\" (UID: \"8ac48e42-bde7-4701-b994-825906603b06\") "
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.118451 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bb5169a-229e-4d38-beea-4783c11d0098-utilities\") pod \"8bb5169a-229e-4d38-beea-4783c11d0098\" (UID: \"8bb5169a-229e-4d38-beea-4783c11d0098\") "
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.118510 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2lfm\" (UniqueName: \"kubernetes.io/projected/8bb5169a-229e-4d38-beea-4783c11d0098-kube-api-access-f2lfm\") pod \"8bb5169a-229e-4d38-beea-4783c11d0098\" (UID: \"8bb5169a-229e-4d38-beea-4783c11d0098\") "
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.118623 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwv8t\" (UniqueName: \"kubernetes.io/projected/8ac48e42-bde7-4701-b994-825906603b06-kube-api-access-bwv8t\") pod \"8ac48e42-bde7-4701-b994-825906603b06\" (UID: \"8ac48e42-bde7-4701-b994-825906603b06\") "
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.118667 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bb5169a-229e-4d38-beea-4783c11d0098-catalog-content\") pod \"8bb5169a-229e-4d38-beea-4783c11d0098\" (UID: \"8bb5169a-229e-4d38-beea-4783c11d0098\") "
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.119601 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ac48e42-bde7-4701-b994-825906603b06-marketplace-trusted-ca\") pod \"8ac48e42-bde7-4701-b994-825906603b06\" (UID: \"8ac48e42-bde7-4701-b994-825906603b06\") "
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.119677 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bb5169a-229e-4d38-beea-4783c11d0098-utilities" (OuterVolumeSpecName: "utilities") pod "8bb5169a-229e-4d38-beea-4783c11d0098" (UID: "8bb5169a-229e-4d38-beea-4783c11d0098"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.120121 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bb5169a-229e-4d38-beea-4783c11d0098-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.120166 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b219edd-2ebd-4968-b427-ec555eade68c-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.120179 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b219edd-2ebd-4968-b427-ec555eade68c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.120186 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ac48e42-bde7-4701-b994-825906603b06-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "8ac48e42-bde7-4701-b994-825906603b06" (UID: "8ac48e42-bde7-4701-b994-825906603b06"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.120196 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnxtf\" (UniqueName: \"kubernetes.io/projected/1b219edd-2ebd-4968-b427-ec555eade68c-kube-api-access-vnxtf\") on node \"crc\" DevicePath \"\""
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.121018 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ac48e42-bde7-4701-b994-825906603b06-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "8ac48e42-bde7-4701-b994-825906603b06" (UID: "8ac48e42-bde7-4701-b994-825906603b06"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.122197 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bb5169a-229e-4d38-beea-4783c11d0098-kube-api-access-f2lfm" (OuterVolumeSpecName: "kube-api-access-f2lfm") pod "8bb5169a-229e-4d38-beea-4783c11d0098" (UID: "8bb5169a-229e-4d38-beea-4783c11d0098"). InnerVolumeSpecName "kube-api-access-f2lfm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.124099 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ac48e42-bde7-4701-b994-825906603b06-kube-api-access-bwv8t" (OuterVolumeSpecName: "kube-api-access-bwv8t") pod "8ac48e42-bde7-4701-b994-825906603b06" (UID: "8ac48e42-bde7-4701-b994-825906603b06"). InnerVolumeSpecName "kube-api-access-bwv8t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.202173 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bb5169a-229e-4d38-beea-4783c11d0098-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8bb5169a-229e-4d38-beea-4783c11d0098" (UID: "8bb5169a-229e-4d38-beea-4783c11d0098"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
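The unmount and detach entries above interleave five pods being torn down at once. One way to untangle such a stretch, assuming raw kubelet log text on stdin, is to tally detached volumes per pod UID, which is embedded in the escaped UniqueName. A sketch, not a supported tool:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

// Captures the 36-char pod UID and the volume suffix from an escaped
// UniqueName such as \"kubernetes.io/empty-dir/<uid>-catalog-content\".
var uniqueName = regexp.MustCompile(
	`UniqueName: \\"kubernetes\.io/[^/]+/([0-9a-f-]{36})-([^\\"]+)\\"`)

func main() {
	perPod := map[string][]string{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // entries can be long
	for sc.Scan() {
		line := sc.Text()
		if !strings.Contains(line, "Volume detached") {
			continue
		}
		if m := uniqueName.FindStringSubmatch(line); m != nil {
			perPod[m[1]] = append(perPod[m[1]], m[2])
		}
	}
	for uid, vols := range perPod {
		fmt.Printf("%s: %d detached (%s)\n", uid, len(vols), strings.Join(vols, ", "))
	}
}
```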
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.221380 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e3253a9-fac0-401c-8e02-52758dbc40f3-utilities\") pod \"0e3253a9-fac0-401c-8e02-52758dbc40f3\" (UID: \"0e3253a9-fac0-401c-8e02-52758dbc40f3\") "
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.221468 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phm66\" (UniqueName: \"kubernetes.io/projected/e59d5724-424f-4151-98a4-c2cfa3918ac0-kube-api-access-phm66\") pod \"e59d5724-424f-4151-98a4-c2cfa3918ac0\" (UID: \"e59d5724-424f-4151-98a4-c2cfa3918ac0\") "
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.221542 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2k5h\" (UniqueName: \"kubernetes.io/projected/0e3253a9-fac0-401c-8e02-52758dbc40f3-kube-api-access-h2k5h\") pod \"0e3253a9-fac0-401c-8e02-52758dbc40f3\" (UID: \"0e3253a9-fac0-401c-8e02-52758dbc40f3\") "
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.221559 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e59d5724-424f-4151-98a4-c2cfa3918ac0-catalog-content\") pod \"e59d5724-424f-4151-98a4-c2cfa3918ac0\" (UID: \"e59d5724-424f-4151-98a4-c2cfa3918ac0\") "
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.221575 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e3253a9-fac0-401c-8e02-52758dbc40f3-catalog-content\") pod \"0e3253a9-fac0-401c-8e02-52758dbc40f3\" (UID: \"0e3253a9-fac0-401c-8e02-52758dbc40f3\") "
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.221665 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e59d5724-424f-4151-98a4-c2cfa3918ac0-utilities\") pod \"e59d5724-424f-4151-98a4-c2cfa3918ac0\" (UID: \"e59d5724-424f-4151-98a4-c2cfa3918ac0\") "
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.221894 4775 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ac48e42-bde7-4701-b994-825906603b06-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.221912 4775 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ac48e42-bde7-4701-b994-825906603b06-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.221922 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2lfm\" (UniqueName: \"kubernetes.io/projected/8bb5169a-229e-4d38-beea-4783c11d0098-kube-api-access-f2lfm\") on node \"crc\" DevicePath \"\""
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.221932 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwv8t\" (UniqueName: \"kubernetes.io/projected/8ac48e42-bde7-4701-b994-825906603b06-kube-api-access-bwv8t\") on node \"crc\" DevicePath \"\""
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.221941 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bb5169a-229e-4d38-beea-4783c11d0098-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.222307 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e3253a9-fac0-401c-8e02-52758dbc40f3-utilities" (OuterVolumeSpecName: "utilities") pod "0e3253a9-fac0-401c-8e02-52758dbc40f3" (UID: "0e3253a9-fac0-401c-8e02-52758dbc40f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.222853 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e59d5724-424f-4151-98a4-c2cfa3918ac0-utilities" (OuterVolumeSpecName: "utilities") pod "e59d5724-424f-4151-98a4-c2cfa3918ac0" (UID: "e59d5724-424f-4151-98a4-c2cfa3918ac0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.225163 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e3253a9-fac0-401c-8e02-52758dbc40f3-kube-api-access-h2k5h" (OuterVolumeSpecName: "kube-api-access-h2k5h") pod "0e3253a9-fac0-401c-8e02-52758dbc40f3" (UID: "0e3253a9-fac0-401c-8e02-52758dbc40f3"). InnerVolumeSpecName "kube-api-access-h2k5h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.225644 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e59d5724-424f-4151-98a4-c2cfa3918ac0-kube-api-access-phm66" (OuterVolumeSpecName: "kube-api-access-phm66") pod "e59d5724-424f-4151-98a4-c2cfa3918ac0" (UID: "e59d5724-424f-4151-98a4-c2cfa3918ac0"). InnerVolumeSpecName "kube-api-access-phm66". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.251461 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e59d5724-424f-4151-98a4-c2cfa3918ac0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e59d5724-424f-4151-98a4-c2cfa3918ac0" (UID: "e59d5724-424f-4151-98a4-c2cfa3918ac0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.323557 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e59d5724-424f-4151-98a4-c2cfa3918ac0-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.323609 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e3253a9-fac0-401c-8e02-52758dbc40f3-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.323620 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phm66\" (UniqueName: \"kubernetes.io/projected/e59d5724-424f-4151-98a4-c2cfa3918ac0-kube-api-access-phm66\") on node \"crc\" DevicePath \"\""
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.323631 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2k5h\" (UniqueName: \"kubernetes.io/projected/0e3253a9-fac0-401c-8e02-52758dbc40f3-kube-api-access-h2k5h\") on node \"crc\" DevicePath \"\""
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.323641 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e59d5724-424f-4151-98a4-c2cfa3918ac0-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.358579 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e3253a9-fac0-401c-8e02-52758dbc40f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e3253a9-fac0-401c-8e02-52758dbc40f3" (UID: "0e3253a9-fac0-401c-8e02-52758dbc40f3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.407337 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-24s7d"]
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.424644 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e3253a9-fac0-401c-8e02-52758dbc40f3-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.734916 4775 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-pmcq8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.734980 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" podUID="8ac48e42-bde7-4701-b994-825906603b06" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.18:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.813091 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q6l68" event={"ID":"e59d5724-424f-4151-98a4-c2cfa3918ac0","Type":"ContainerDied","Data":"26c35738c37491d0603ee348b5fe634ea59da9d48f5e4b15355f05e6dc983614"}
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.813166 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q6l68"
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.813874 4775 scope.go:117] "RemoveContainer" containerID="706b207c906b11477ffafcc96a740d5e3fd0c32011317bda62a73b4005aa1b8f"
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.815571 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-285dn" event={"ID":"1b219edd-2ebd-4968-b427-ec555eade68c","Type":"ContainerDied","Data":"9a6cbd2e89e6d00653f0a6c222530e1e89b3f96e06271f5d87d7fff651ac3937"}
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.815655 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-285dn"
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.819571 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2q2jj"
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.819573 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q2jj" event={"ID":"8bb5169a-229e-4d38-beea-4783c11d0098","Type":"ContainerDied","Data":"3666244710ce45438b030ced5df57918d02f4be6ca49d93c06949ae50a2a548e"}
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.821121 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" event={"ID":"8ac48e42-bde7-4701-b994-825906603b06","Type":"ContainerDied","Data":"14f4d6283aff6de605f724a865763d27a0a448211bbacd5d102fb5562e6f44ef"}
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.821215 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8"
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.823397 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84gx7" event={"ID":"0e3253a9-fac0-401c-8e02-52758dbc40f3","Type":"ContainerDied","Data":"15af52003ac596b61d4d000ce7f453341ef0c574add7e4ae39f4de44a23d82f4"}
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.823451 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-84gx7"
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.825747 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-24s7d" event={"ID":"ffa6638c-aaa0-418b-ad22-e5532ae16f68","Type":"ContainerStarted","Data":"f94ffef5cffeb674ca96c38d4958c9570180370c8682636ebe6553c7d4d8066d"}
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.825772 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-24s7d" event={"ID":"ffa6638c-aaa0-418b-ad22-e5532ae16f68","Type":"ContainerStarted","Data":"308617b5db3c6fab4969661f3b3eff4fa11db923b836f0b38c2f7187515fce6f"}
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.825988 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-24s7d"
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.827882 4775 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-24s7d container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.69:8080/healthz\": dial tcp 10.217.0.69:8080: connect: connection refused" start-of-body=
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.827949 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-24s7d" podUID="ffa6638c-aaa0-418b-ad22-e5532ae16f68" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.69:8080/healthz\": dial tcp 10.217.0.69:8080: connect: connection refused"
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.833195 4775 scope.go:117] "RemoveContainer" containerID="cfd053c22baaf71bc6e6f5aaf2077bc268a3849c132a7cf71ad6b25d80b48bc6"
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.844681 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q6l68"]
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.856265 4775 scope.go:117] "RemoveContainer" containerID="b99c9f768aa87908f3ac8df6adf51f693264f7a4696b77a222908931aa45eca9"
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.856441 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q6l68"]
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.877704 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-24s7d" podStartSLOduration=1.877679533 podStartE2EDuration="1.877679533s" podCreationTimestamp="2026-01-23 14:10:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:10:51.865586125 +0000 UTC m=+398.860414865" watchObservedRunningTime="2026-01-23 14:10:51.877679533 +0000 UTC m=+398.872508273"
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.881561 4775 scope.go:117] "RemoveContainer" containerID="5d5b3239c4354bbf8668793adb57fca35d10a6d969fbc9bd29c2463925617ab2"
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.886044 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pmcq8"]
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.890094 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pmcq8"]
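The two readiness failures above show both failure modes of an HTTP probe during a rollout: a client timeout against the dying pmcq8 pod ("context deadline exceeded") and a "connection refused" against the replacement 24s7d pod before it starts listening. A minimal Go reproduction of such a probe; the URL and one-second timeout are placeholders, not the pod's actual probe spec:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Enforce a deadline the way the kubelet prober bounds each attempt.
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get("http://10.217.0.69:8080/healthz")
	if err != nil {
		// e.g. "connect: connection refused" before the server binds,
		// or "Client.Timeout exceeded" when the endpoint hangs.
		fmt.Println("probe failure:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.Status) // 200 means ready here
}
```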
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.896657 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-84gx7"]
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.901849 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-84gx7"]
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.903781 4775 scope.go:117] "RemoveContainer" containerID="b1229993babbc54c28d7f94650301e60c409ed8c65f3e43af5dfec3a30554ce5"
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.910933 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-285dn"]
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.914014 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-285dn"]
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.929764 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2q2jj"]
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.933748 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2q2jj"]
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.941762 4775 scope.go:117] "RemoveContainer" containerID="1dfa5709162617f477770a0c1b0ee689961a84471dd689b9f7007baa498421fb"
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.958029 4775 scope.go:117] "RemoveContainer" containerID="c7260cd3d625fa792d5d94bcaae087826a69b9166dd1b6258fd35d2e1bd77b66"
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.979308 4775 scope.go:117] "RemoveContainer" containerID="e563f1706af6b75f9ac6731329cafb2b41d302473241046df0512766a2019809"
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.994536 4775 scope.go:117] "RemoveContainer" containerID="c0baa5a93e54c6225c779b90a89902f01c5bdd44c7fddb995bab3ef18e6ecb5f"
Jan 23 14:10:52 crc kubenswrapper[4775]: I0123 14:10:52.009011 4775 scope.go:117] "RemoveContainer" containerID="f51d1a8b2d530002962d11af10b4a9dc9403d48b6849c26ac64175b119f21f51"
Jan 23 14:10:52 crc kubenswrapper[4775]: I0123 14:10:52.021921 4775 scope.go:117] "RemoveContainer" containerID="d42ef899e57f6183a5f1a3a8ba0663646429d61c6d74c35df738852826152a1c"
Jan 23 14:10:52 crc kubenswrapper[4775]: I0123 14:10:52.040060 4775 scope.go:117] "RemoveContainer" containerID="5650f2902470285f87f0519671b820000e9540073b92320e14586d65634addb8"
Jan 23 14:10:52 crc kubenswrapper[4775]: I0123 14:10:52.056132 4775 scope.go:117] "RemoveContainer" containerID="33e54abbac164ceea7f804e54924e8f9324295ef8959032204bb2d352664a565"
Jan 23 14:10:52 crc kubenswrapper[4775]: I0123 14:10:52.843343 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-24s7d"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.169860 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bb2pb"]
Jan 23 14:10:53 crc kubenswrapper[4775]: E0123 14:10:53.170614 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b219edd-2ebd-4968-b427-ec555eade68c" containerName="extract-utilities"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.170754 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b219edd-2ebd-4968-b427-ec555eade68c" containerName="extract-utilities"
Jan 23 14:10:53 crc kubenswrapper[4775]: E0123 14:10:53.170908 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b219edd-2ebd-4968-b427-ec555eade68c" containerName="extract-content"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.171069 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b219edd-2ebd-4968-b427-ec555eade68c" containerName="extract-content"
Jan 23 14:10:53 crc kubenswrapper[4775]: E0123 14:10:53.171198 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59d5724-424f-4151-98a4-c2cfa3918ac0" containerName="extract-utilities"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.171311 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59d5724-424f-4151-98a4-c2cfa3918ac0" containerName="extract-utilities"
Jan 23 14:10:53 crc kubenswrapper[4775]: E0123 14:10:53.171426 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bb5169a-229e-4d38-beea-4783c11d0098" containerName="registry-server"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.171536 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bb5169a-229e-4d38-beea-4783c11d0098" containerName="registry-server"
Jan 23 14:10:53 crc kubenswrapper[4775]: E0123 14:10:53.171652 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59d5724-424f-4151-98a4-c2cfa3918ac0" containerName="extract-content"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.171776 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59d5724-424f-4151-98a4-c2cfa3918ac0" containerName="extract-content"
Jan 23 14:10:53 crc kubenswrapper[4775]: E0123 14:10:53.171946 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e3253a9-fac0-401c-8e02-52758dbc40f3" containerName="registry-server"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.172061 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e3253a9-fac0-401c-8e02-52758dbc40f3" containerName="registry-server"
Jan 23 14:10:53 crc kubenswrapper[4775]: E0123 14:10:53.172174 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e3253a9-fac0-401c-8e02-52758dbc40f3" containerName="extract-utilities"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.172295 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e3253a9-fac0-401c-8e02-52758dbc40f3" containerName="extract-utilities"
Jan 23 14:10:53 crc kubenswrapper[4775]: E0123 14:10:53.172412 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bb5169a-229e-4d38-beea-4783c11d0098" containerName="extract-utilities"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.172566 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bb5169a-229e-4d38-beea-4783c11d0098" containerName="extract-utilities"
Jan 23 14:10:53 crc kubenswrapper[4775]: E0123 14:10:53.172898 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bb5169a-229e-4d38-beea-4783c11d0098" containerName="extract-content"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.174191 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bb5169a-229e-4d38-beea-4783c11d0098" containerName="extract-content"
Jan 23 14:10:53 crc kubenswrapper[4775]: E0123 14:10:53.174334 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b219edd-2ebd-4968-b427-ec555eade68c" containerName="registry-server"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.174469 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b219edd-2ebd-4968-b427-ec555eade68c" containerName="registry-server"
Jan 23 14:10:53 crc kubenswrapper[4775]: E0123 14:10:53.174599 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59d5724-424f-4151-98a4-c2cfa3918ac0" containerName="registry-server"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.174992 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59d5724-424f-4151-98a4-c2cfa3918ac0" containerName="registry-server"
Jan 23 14:10:53 crc kubenswrapper[4775]: E0123 14:10:53.175260 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e3253a9-fac0-401c-8e02-52758dbc40f3" containerName="extract-content"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.175454 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e3253a9-fac0-401c-8e02-52758dbc40f3" containerName="extract-content"
Jan 23 14:10:53 crc kubenswrapper[4775]: E0123 14:10:53.175622 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac48e42-bde7-4701-b994-825906603b06" containerName="marketplace-operator"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.175742 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac48e42-bde7-4701-b994-825906603b06" containerName="marketplace-operator"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.176072 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bb5169a-229e-4d38-beea-4783c11d0098" containerName="registry-server"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.176231 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e3253a9-fac0-401c-8e02-52758dbc40f3" containerName="registry-server"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.176353 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59d5724-424f-4151-98a4-c2cfa3918ac0" containerName="registry-server"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.176651 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ac48e42-bde7-4701-b994-825906603b06" containerName="marketplace-operator"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.176790 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b219edd-2ebd-4968-b427-ec555eade68c" containerName="registry-server"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.178048 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bb2pb"]
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.178247 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bb2pb"
Need to start a new one" pod="openshift-marketplace/certified-operators-bb2pb" Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.182681 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.218536 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.218589 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.218626 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.219307 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"64681a72387a3235a4c6d3370b32de4e57c80d8102b47cdde5e10511ccb7381b"} pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.219373 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" containerID="cri-o://64681a72387a3235a4c6d3370b32de4e57c80d8102b47cdde5e10511ccb7381b" gracePeriod=600 Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.254366 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5-utilities\") pod \"certified-operators-bb2pb\" (UID: \"d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5\") " pod="openshift-marketplace/certified-operators-bb2pb" Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.254416 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5-catalog-content\") pod \"certified-operators-bb2pb\" (UID: \"d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5\") " pod="openshift-marketplace/certified-operators-bb2pb" Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.254461 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkqcd\" (UniqueName: \"kubernetes.io/projected/d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5-kube-api-access-jkqcd\") pod \"certified-operators-bb2pb\" (UID: \"d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5\") " pod="openshift-marketplace/certified-operators-bb2pb" Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.355227 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5-utilities\") pod 
\"certified-operators-bb2pb\" (UID: \"d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5\") " pod="openshift-marketplace/certified-operators-bb2pb" Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.355296 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5-catalog-content\") pod \"certified-operators-bb2pb\" (UID: \"d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5\") " pod="openshift-marketplace/certified-operators-bb2pb" Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.355352 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkqcd\" (UniqueName: \"kubernetes.io/projected/d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5-kube-api-access-jkqcd\") pod \"certified-operators-bb2pb\" (UID: \"d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5\") " pod="openshift-marketplace/certified-operators-bb2pb" Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.356145 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5-utilities\") pod \"certified-operators-bb2pb\" (UID: \"d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5\") " pod="openshift-marketplace/certified-operators-bb2pb" Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.356393 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5-catalog-content\") pod \"certified-operators-bb2pb\" (UID: \"d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5\") " pod="openshift-marketplace/certified-operators-bb2pb" Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.380229 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkqcd\" (UniqueName: \"kubernetes.io/projected/d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5-kube-api-access-jkqcd\") pod \"certified-operators-bb2pb\" (UID: \"d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5\") " pod="openshift-marketplace/certified-operators-bb2pb" Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.498452 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bb2pb" Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.727896 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e3253a9-fac0-401c-8e02-52758dbc40f3" path="/var/lib/kubelet/pods/0e3253a9-fac0-401c-8e02-52758dbc40f3/volumes" Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.729179 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b219edd-2ebd-4968-b427-ec555eade68c" path="/var/lib/kubelet/pods/1b219edd-2ebd-4968-b427-ec555eade68c/volumes" Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.735611 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ac48e42-bde7-4701-b994-825906603b06" path="/var/lib/kubelet/pods/8ac48e42-bde7-4701-b994-825906603b06/volumes" Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.736783 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bb5169a-229e-4d38-beea-4783c11d0098" path="/var/lib/kubelet/pods/8bb5169a-229e-4d38-beea-4783c11d0098/volumes" Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.739073 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e59d5724-424f-4151-98a4-c2cfa3918ac0" path="/var/lib/kubelet/pods/e59d5724-424f-4151-98a4-c2cfa3918ac0/volumes" Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.846083 4775 generic.go:334] "Generic (PLEG): container finished" podID="4fea0767-0566-4214-855d-ed0373946271" containerID="64681a72387a3235a4c6d3370b32de4e57c80d8102b47cdde5e10511ccb7381b" exitCode=0 Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.846143 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerDied","Data":"64681a72387a3235a4c6d3370b32de4e57c80d8102b47cdde5e10511ccb7381b"} Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.846280 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerStarted","Data":"3a30391cad6397529420dfc5378ada691294f3663e7d36abc04ee2debc01dfeb"} Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.846306 4775 scope.go:117] "RemoveContainer" containerID="69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d" Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.929968 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bb2pb"] Jan 23 14:10:53 crc kubenswrapper[4775]: W0123 14:10:53.937526 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9f7bf95_e60c_4dbb_bb9b_0a7c038871f5.slice/crio-7022b882659a424c53c12dd0ac5418bbf9f51d0c15e9e27534d9a4ffff36d4ed WatchSource:0}: Error finding container 7022b882659a424c53c12dd0ac5418bbf9f51d0c15e9e27534d9a4ffff36d4ed: Status 404 returned error can't find the container with id 7022b882659a424c53c12dd0ac5418bbf9f51d0c15e9e27534d9a4ffff36d4ed Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.164101 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sx4qm"] Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.165150 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sx4qm" Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.170632 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.178532 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sx4qm"] Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.265400 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c94dee4-8e79-4f60-a8b9-2c1f33490ba7-utilities\") pod \"redhat-operators-sx4qm\" (UID: \"0c94dee4-8e79-4f60-a8b9-2c1f33490ba7\") " pod="openshift-marketplace/redhat-operators-sx4qm" Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.265457 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c94dee4-8e79-4f60-a8b9-2c1f33490ba7-catalog-content\") pod \"redhat-operators-sx4qm\" (UID: \"0c94dee4-8e79-4f60-a8b9-2c1f33490ba7\") " pod="openshift-marketplace/redhat-operators-sx4qm" Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.265508 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsfvm\" (UniqueName: \"kubernetes.io/projected/0c94dee4-8e79-4f60-a8b9-2c1f33490ba7-kube-api-access-xsfvm\") pod \"redhat-operators-sx4qm\" (UID: \"0c94dee4-8e79-4f60-a8b9-2c1f33490ba7\") " pod="openshift-marketplace/redhat-operators-sx4qm" Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.367136 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c94dee4-8e79-4f60-a8b9-2c1f33490ba7-catalog-content\") pod \"redhat-operators-sx4qm\" (UID: \"0c94dee4-8e79-4f60-a8b9-2c1f33490ba7\") " pod="openshift-marketplace/redhat-operators-sx4qm" Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.367476 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsfvm\" (UniqueName: \"kubernetes.io/projected/0c94dee4-8e79-4f60-a8b9-2c1f33490ba7-kube-api-access-xsfvm\") pod \"redhat-operators-sx4qm\" (UID: \"0c94dee4-8e79-4f60-a8b9-2c1f33490ba7\") " pod="openshift-marketplace/redhat-operators-sx4qm" Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.367657 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c94dee4-8e79-4f60-a8b9-2c1f33490ba7-utilities\") pod \"redhat-operators-sx4qm\" (UID: \"0c94dee4-8e79-4f60-a8b9-2c1f33490ba7\") " pod="openshift-marketplace/redhat-operators-sx4qm" Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.368028 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c94dee4-8e79-4f60-a8b9-2c1f33490ba7-catalog-content\") pod \"redhat-operators-sx4qm\" (UID: \"0c94dee4-8e79-4f60-a8b9-2c1f33490ba7\") " pod="openshift-marketplace/redhat-operators-sx4qm" Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.368590 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c94dee4-8e79-4f60-a8b9-2c1f33490ba7-utilities\") pod \"redhat-operators-sx4qm\" (UID: \"0c94dee4-8e79-4f60-a8b9-2c1f33490ba7\") " 
pod="openshift-marketplace/redhat-operators-sx4qm" Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.395000 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsfvm\" (UniqueName: \"kubernetes.io/projected/0c94dee4-8e79-4f60-a8b9-2c1f33490ba7-kube-api-access-xsfvm\") pod \"redhat-operators-sx4qm\" (UID: \"0c94dee4-8e79-4f60-a8b9-2c1f33490ba7\") " pod="openshift-marketplace/redhat-operators-sx4qm" Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.497254 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sx4qm" Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.855978 4775 generic.go:334] "Generic (PLEG): container finished" podID="d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5" containerID="9989f4fe62a0e1a80697783a84696d42ebb144b7ea9072d980c54c388c525362" exitCode=0 Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.856072 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bb2pb" event={"ID":"d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5","Type":"ContainerDied","Data":"9989f4fe62a0e1a80697783a84696d42ebb144b7ea9072d980c54c388c525362"} Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.856139 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bb2pb" event={"ID":"d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5","Type":"ContainerStarted","Data":"7022b882659a424c53c12dd0ac5418bbf9f51d0c15e9e27534d9a4ffff36d4ed"} Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.967417 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sx4qm"] Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.569404 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8jjcj"] Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.570686 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jjcj" Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.576329 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.579435 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8jjcj"] Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.685546 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bggxz\" (UniqueName: \"kubernetes.io/projected/ed5c162e-62a9-4760-b5e0-a249a70225a0-kube-api-access-bggxz\") pod \"community-operators-8jjcj\" (UID: \"ed5c162e-62a9-4760-b5e0-a249a70225a0\") " pod="openshift-marketplace/community-operators-8jjcj" Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.685638 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed5c162e-62a9-4760-b5e0-a249a70225a0-utilities\") pod \"community-operators-8jjcj\" (UID: \"ed5c162e-62a9-4760-b5e0-a249a70225a0\") " pod="openshift-marketplace/community-operators-8jjcj" Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.685900 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed5c162e-62a9-4760-b5e0-a249a70225a0-catalog-content\") pod \"community-operators-8jjcj\" (UID: \"ed5c162e-62a9-4760-b5e0-a249a70225a0\") " pod="openshift-marketplace/community-operators-8jjcj" Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.787035 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed5c162e-62a9-4760-b5e0-a249a70225a0-catalog-content\") pod \"community-operators-8jjcj\" (UID: \"ed5c162e-62a9-4760-b5e0-a249a70225a0\") " pod="openshift-marketplace/community-operators-8jjcj" Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.787433 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bggxz\" (UniqueName: \"kubernetes.io/projected/ed5c162e-62a9-4760-b5e0-a249a70225a0-kube-api-access-bggxz\") pod \"community-operators-8jjcj\" (UID: \"ed5c162e-62a9-4760-b5e0-a249a70225a0\") " pod="openshift-marketplace/community-operators-8jjcj" Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.787462 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed5c162e-62a9-4760-b5e0-a249a70225a0-utilities\") pod \"community-operators-8jjcj\" (UID: \"ed5c162e-62a9-4760-b5e0-a249a70225a0\") " pod="openshift-marketplace/community-operators-8jjcj" Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.787663 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed5c162e-62a9-4760-b5e0-a249a70225a0-catalog-content\") pod \"community-operators-8jjcj\" (UID: \"ed5c162e-62a9-4760-b5e0-a249a70225a0\") " pod="openshift-marketplace/community-operators-8jjcj" Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.787852 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed5c162e-62a9-4760-b5e0-a249a70225a0-utilities\") pod \"community-operators-8jjcj\" (UID: 
\"ed5c162e-62a9-4760-b5e0-a249a70225a0\") " pod="openshift-marketplace/community-operators-8jjcj" Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.809148 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bggxz\" (UniqueName: \"kubernetes.io/projected/ed5c162e-62a9-4760-b5e0-a249a70225a0-kube-api-access-bggxz\") pod \"community-operators-8jjcj\" (UID: \"ed5c162e-62a9-4760-b5e0-a249a70225a0\") " pod="openshift-marketplace/community-operators-8jjcj" Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.867448 4775 generic.go:334] "Generic (PLEG): container finished" podID="0c94dee4-8e79-4f60-a8b9-2c1f33490ba7" containerID="49eae8b296f7a930d3ad9eb1d232b6b88d388c8ed6fa7354489e7e0745b32b91" exitCode=0 Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.867559 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sx4qm" event={"ID":"0c94dee4-8e79-4f60-a8b9-2c1f33490ba7","Type":"ContainerDied","Data":"49eae8b296f7a930d3ad9eb1d232b6b88d388c8ed6fa7354489e7e0745b32b91"} Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.868494 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sx4qm" event={"ID":"0c94dee4-8e79-4f60-a8b9-2c1f33490ba7","Type":"ContainerStarted","Data":"193db82826c761a02002446dfeffdbc415e5b21166d432e69177b9b669bcaa15"} Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.901891 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jjcj" Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.333539 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8jjcj"] Jan 23 14:10:56 crc kubenswrapper[4775]: W0123 14:10:56.339255 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded5c162e_62a9_4760_b5e0_a249a70225a0.slice/crio-7ff35cfb0cf5ad9ea510a287522f3050d687da21d1562f1aa925203d3b208c3b WatchSource:0}: Error finding container 7ff35cfb0cf5ad9ea510a287522f3050d687da21d1562f1aa925203d3b208c3b: Status 404 returned error can't find the container with id 7ff35cfb0cf5ad9ea510a287522f3050d687da21d1562f1aa925203d3b208c3b Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.567673 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fxcrw"] Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.571008 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fxcrw" Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.573545 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.574262 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fxcrw"] Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.700005 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4x5j\" (UniqueName: \"kubernetes.io/projected/39bc9387-f295-4aec-ad66-8831265c0400-kube-api-access-f4x5j\") pod \"redhat-marketplace-fxcrw\" (UID: \"39bc9387-f295-4aec-ad66-8831265c0400\") " pod="openshift-marketplace/redhat-marketplace-fxcrw" Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.700073 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39bc9387-f295-4aec-ad66-8831265c0400-catalog-content\") pod \"redhat-marketplace-fxcrw\" (UID: \"39bc9387-f295-4aec-ad66-8831265c0400\") " pod="openshift-marketplace/redhat-marketplace-fxcrw" Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.700117 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39bc9387-f295-4aec-ad66-8831265c0400-utilities\") pod \"redhat-marketplace-fxcrw\" (UID: \"39bc9387-f295-4aec-ad66-8831265c0400\") " pod="openshift-marketplace/redhat-marketplace-fxcrw" Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.801755 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4x5j\" (UniqueName: \"kubernetes.io/projected/39bc9387-f295-4aec-ad66-8831265c0400-kube-api-access-f4x5j\") pod \"redhat-marketplace-fxcrw\" (UID: \"39bc9387-f295-4aec-ad66-8831265c0400\") " pod="openshift-marketplace/redhat-marketplace-fxcrw" Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.801881 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39bc9387-f295-4aec-ad66-8831265c0400-catalog-content\") pod \"redhat-marketplace-fxcrw\" (UID: \"39bc9387-f295-4aec-ad66-8831265c0400\") " pod="openshift-marketplace/redhat-marketplace-fxcrw" Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.801962 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39bc9387-f295-4aec-ad66-8831265c0400-utilities\") pod \"redhat-marketplace-fxcrw\" (UID: \"39bc9387-f295-4aec-ad66-8831265c0400\") " pod="openshift-marketplace/redhat-marketplace-fxcrw" Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.802441 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39bc9387-f295-4aec-ad66-8831265c0400-catalog-content\") pod \"redhat-marketplace-fxcrw\" (UID: \"39bc9387-f295-4aec-ad66-8831265c0400\") " pod="openshift-marketplace/redhat-marketplace-fxcrw" Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.807022 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39bc9387-f295-4aec-ad66-8831265c0400-utilities\") pod \"redhat-marketplace-fxcrw\" (UID: 
\"39bc9387-f295-4aec-ad66-8831265c0400\") " pod="openshift-marketplace/redhat-marketplace-fxcrw" Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.827664 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4x5j\" (UniqueName: \"kubernetes.io/projected/39bc9387-f295-4aec-ad66-8831265c0400-kube-api-access-f4x5j\") pod \"redhat-marketplace-fxcrw\" (UID: \"39bc9387-f295-4aec-ad66-8831265c0400\") " pod="openshift-marketplace/redhat-marketplace-fxcrw" Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.875817 4775 generic.go:334] "Generic (PLEG): container finished" podID="ed5c162e-62a9-4760-b5e0-a249a70225a0" containerID="b1ed211a24486fc778a0c4e86d565a72a63e7d607df308faee3143b25d118281" exitCode=0 Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.875876 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jjcj" event={"ID":"ed5c162e-62a9-4760-b5e0-a249a70225a0","Type":"ContainerDied","Data":"b1ed211a24486fc778a0c4e86d565a72a63e7d607df308faee3143b25d118281"} Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.876068 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jjcj" event={"ID":"ed5c162e-62a9-4760-b5e0-a249a70225a0","Type":"ContainerStarted","Data":"7ff35cfb0cf5ad9ea510a287522f3050d687da21d1562f1aa925203d3b208c3b"} Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.887129 4775 generic.go:334] "Generic (PLEG): container finished" podID="d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5" containerID="a39528741de9229819cfbf91ec99690572fd8296ff83d569dd5ae78787787e9e" exitCode=0 Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.887219 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bb2pb" event={"ID":"d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5","Type":"ContainerDied","Data":"a39528741de9229819cfbf91ec99690572fd8296ff83d569dd5ae78787787e9e"} Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.888010 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fxcrw" Jan 23 14:10:57 crc kubenswrapper[4775]: I0123 14:10:57.368097 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fxcrw"] Jan 23 14:10:57 crc kubenswrapper[4775]: W0123 14:10:57.383203 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39bc9387_f295_4aec_ad66_8831265c0400.slice/crio-fac8df1d9dbccf1d775642e9100fc911fdd3ff0f8ffbc284740136ee14d51f4b WatchSource:0}: Error finding container fac8df1d9dbccf1d775642e9100fc911fdd3ff0f8ffbc284740136ee14d51f4b: Status 404 returned error can't find the container with id fac8df1d9dbccf1d775642e9100fc911fdd3ff0f8ffbc284740136ee14d51f4b Jan 23 14:10:57 crc kubenswrapper[4775]: I0123 14:10:57.895007 4775 generic.go:334] "Generic (PLEG): container finished" podID="39bc9387-f295-4aec-ad66-8831265c0400" containerID="2ca374f668aa98ec92160c88d95aa0bf42cc77656b24f0b3a81251c876059f6d" exitCode=0 Jan 23 14:10:57 crc kubenswrapper[4775]: I0123 14:10:57.895306 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fxcrw" event={"ID":"39bc9387-f295-4aec-ad66-8831265c0400","Type":"ContainerDied","Data":"2ca374f668aa98ec92160c88d95aa0bf42cc77656b24f0b3a81251c876059f6d"} Jan 23 14:10:57 crc kubenswrapper[4775]: I0123 14:10:57.895332 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fxcrw" event={"ID":"39bc9387-f295-4aec-ad66-8831265c0400","Type":"ContainerStarted","Data":"fac8df1d9dbccf1d775642e9100fc911fdd3ff0f8ffbc284740136ee14d51f4b"} Jan 23 14:10:57 crc kubenswrapper[4775]: I0123 14:10:57.903435 4775 generic.go:334] "Generic (PLEG): container finished" podID="0c94dee4-8e79-4f60-a8b9-2c1f33490ba7" containerID="0e0ade394de3c5d4b6ec38f9d3ab7dec24f5000eeea85a3447b591f5dd1b8390" exitCode=0 Jan 23 14:10:57 crc kubenswrapper[4775]: I0123 14:10:57.903545 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sx4qm" event={"ID":"0c94dee4-8e79-4f60-a8b9-2c1f33490ba7","Type":"ContainerDied","Data":"0e0ade394de3c5d4b6ec38f9d3ab7dec24f5000eeea85a3447b591f5dd1b8390"} Jan 23 14:10:57 crc kubenswrapper[4775]: I0123 14:10:57.909197 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bb2pb" event={"ID":"d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5","Type":"ContainerStarted","Data":"c428da0ffb577d1a5b9dfe716486a04460434844ab13ea932f708b3e9c7dd709"} Jan 23 14:10:57 crc kubenswrapper[4775]: I0123 14:10:57.962375 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bb2pb" podStartSLOduration=2.493635352 podStartE2EDuration="4.962358308s" podCreationTimestamp="2026-01-23 14:10:53 +0000 UTC" firstStartedPulling="2026-01-23 14:10:54.858599842 +0000 UTC m=+401.853428582" lastFinishedPulling="2026-01-23 14:10:57.327322808 +0000 UTC m=+404.322151538" observedRunningTime="2026-01-23 14:10:57.959590606 +0000 UTC m=+404.954419366" watchObservedRunningTime="2026-01-23 14:10:57.962358308 +0000 UTC m=+404.957187048" Jan 23 14:10:58 crc kubenswrapper[4775]: I0123 14:10:58.928759 4775 generic.go:334] "Generic (PLEG): container finished" podID="ed5c162e-62a9-4760-b5e0-a249a70225a0" containerID="f4e7579e99a37bc77c51aec07b40d31bbecb98f6aa3e493ddef12ea82b70776e" exitCode=0 Jan 23 14:10:58 crc kubenswrapper[4775]: I0123 14:10:58.928886 4775 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jjcj" event={"ID":"ed5c162e-62a9-4760-b5e0-a249a70225a0","Type":"ContainerDied","Data":"f4e7579e99a37bc77c51aec07b40d31bbecb98f6aa3e493ddef12ea82b70776e"} Jan 23 14:10:58 crc kubenswrapper[4775]: I0123 14:10:58.951031 4775 generic.go:334] "Generic (PLEG): container finished" podID="39bc9387-f295-4aec-ad66-8831265c0400" containerID="24a20544bd98c05044377edf9951f09561f4ecff0d2541728a199f5d87991f32" exitCode=0 Jan 23 14:10:58 crc kubenswrapper[4775]: I0123 14:10:58.951168 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fxcrw" event={"ID":"39bc9387-f295-4aec-ad66-8831265c0400","Type":"ContainerDied","Data":"24a20544bd98c05044377edf9951f09561f4ecff0d2541728a199f5d87991f32"} Jan 23 14:10:58 crc kubenswrapper[4775]: I0123 14:10:58.969253 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sx4qm" event={"ID":"0c94dee4-8e79-4f60-a8b9-2c1f33490ba7","Type":"ContainerStarted","Data":"1b2aabd99ad88932381935bc241ac835de5067e7f97071950b5736763e3d2bce"} Jan 23 14:10:58 crc kubenswrapper[4775]: I0123 14:10:58.991702 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sx4qm" podStartSLOduration=2.407028284 podStartE2EDuration="4.991678641s" podCreationTimestamp="2026-01-23 14:10:54 +0000 UTC" firstStartedPulling="2026-01-23 14:10:55.869341196 +0000 UTC m=+402.864169936" lastFinishedPulling="2026-01-23 14:10:58.453991553 +0000 UTC m=+405.448820293" observedRunningTime="2026-01-23 14:10:58.986364594 +0000 UTC m=+405.981193354" watchObservedRunningTime="2026-01-23 14:10:58.991678641 +0000 UTC m=+405.986507381" Jan 23 14:10:59 crc kubenswrapper[4775]: I0123 14:10:59.974905 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fxcrw" event={"ID":"39bc9387-f295-4aec-ad66-8831265c0400","Type":"ContainerStarted","Data":"009c56fbfdb39c22bbd66c058979011943b0707f6f91b40e918b6846e71897ff"} Jan 23 14:10:59 crc kubenswrapper[4775]: I0123 14:10:59.977844 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jjcj" event={"ID":"ed5c162e-62a9-4760-b5e0-a249a70225a0","Type":"ContainerStarted","Data":"8c3a05ad5d8d3703f17b3d70b8b67f738074138eb987127c6dd602fb6cf5f591"} Jan 23 14:10:59 crc kubenswrapper[4775]: I0123 14:10:59.993208 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fxcrw" podStartSLOduration=2.149992542 podStartE2EDuration="3.9931894s" podCreationTimestamp="2026-01-23 14:10:56 +0000 UTC" firstStartedPulling="2026-01-23 14:10:57.897406825 +0000 UTC m=+404.892235565" lastFinishedPulling="2026-01-23 14:10:59.740603683 +0000 UTC m=+406.735432423" observedRunningTime="2026-01-23 14:10:59.990242793 +0000 UTC m=+406.985071543" watchObservedRunningTime="2026-01-23 14:10:59.9931894 +0000 UTC m=+406.988018140" Jan 23 14:11:00 crc kubenswrapper[4775]: I0123 14:11:00.014538 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8jjcj" podStartSLOduration=2.543099527 podStartE2EDuration="5.014519962s" podCreationTimestamp="2026-01-23 14:10:55 +0000 UTC" firstStartedPulling="2026-01-23 14:10:56.881753338 +0000 UTC m=+403.876582118" lastFinishedPulling="2026-01-23 14:10:59.353173813 +0000 UTC m=+406.348002553" observedRunningTime="2026-01-23 
14:11:00.012013128 +0000 UTC m=+407.006841888" watchObservedRunningTime="2026-01-23 14:11:00.014519962 +0000 UTC m=+407.009348702" Jan 23 14:11:03 crc kubenswrapper[4775]: I0123 14:11:03.498785 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bb2pb" Jan 23 14:11:03 crc kubenswrapper[4775]: I0123 14:11:03.499577 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bb2pb" Jan 23 14:11:03 crc kubenswrapper[4775]: I0123 14:11:03.571751 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bb2pb" Jan 23 14:11:03 crc kubenswrapper[4775]: I0123 14:11:03.869312 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" podUID="85b405af-7314-4e53-93a5-252b69153561" containerName="registry" containerID="cri-o://4284b5552eca9842bbe2aed75c1f5823dcb142543281afc7abbca3b100b2fc8e" gracePeriod=30 Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.047672 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bb2pb" Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.497587 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sx4qm" Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.497635 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sx4qm" Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.828678 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.921861 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-registry-tls\") pod \"85b405af-7314-4e53-93a5-252b69153561\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.921937 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/85b405af-7314-4e53-93a5-252b69153561-trusted-ca\") pod \"85b405af-7314-4e53-93a5-252b69153561\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.921967 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-bound-sa-token\") pod \"85b405af-7314-4e53-93a5-252b69153561\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.922001 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/85b405af-7314-4e53-93a5-252b69153561-installation-pull-secrets\") pod \"85b405af-7314-4e53-93a5-252b69153561\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.922044 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkptx\" (UniqueName: \"kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-kube-api-access-hkptx\") pod 
\"85b405af-7314-4e53-93a5-252b69153561\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.922067 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/85b405af-7314-4e53-93a5-252b69153561-ca-trust-extracted\") pod \"85b405af-7314-4e53-93a5-252b69153561\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.922238 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"85b405af-7314-4e53-93a5-252b69153561\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.922259 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/85b405af-7314-4e53-93a5-252b69153561-registry-certificates\") pod \"85b405af-7314-4e53-93a5-252b69153561\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.923306 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85b405af-7314-4e53-93a5-252b69153561-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "85b405af-7314-4e53-93a5-252b69153561" (UID: "85b405af-7314-4e53-93a5-252b69153561"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.923631 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85b405af-7314-4e53-93a5-252b69153561-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "85b405af-7314-4e53-93a5-252b69153561" (UID: "85b405af-7314-4e53-93a5-252b69153561"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.929188 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85b405af-7314-4e53-93a5-252b69153561-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "85b405af-7314-4e53-93a5-252b69153561" (UID: "85b405af-7314-4e53-93a5-252b69153561"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.932179 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "85b405af-7314-4e53-93a5-252b69153561" (UID: "85b405af-7314-4e53-93a5-252b69153561"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.932996 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-kube-api-access-hkptx" (OuterVolumeSpecName: "kube-api-access-hkptx") pod "85b405af-7314-4e53-93a5-252b69153561" (UID: "85b405af-7314-4e53-93a5-252b69153561"). InnerVolumeSpecName "kube-api-access-hkptx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.933156 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "85b405af-7314-4e53-93a5-252b69153561" (UID: "85b405af-7314-4e53-93a5-252b69153561"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.933595 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "85b405af-7314-4e53-93a5-252b69153561" (UID: "85b405af-7314-4e53-93a5-252b69153561"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.951061 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85b405af-7314-4e53-93a5-252b69153561-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "85b405af-7314-4e53-93a5-252b69153561" (UID: "85b405af-7314-4e53-93a5-252b69153561"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.006676 4775 generic.go:334] "Generic (PLEG): container finished" podID="85b405af-7314-4e53-93a5-252b69153561" containerID="4284b5552eca9842bbe2aed75c1f5823dcb142543281afc7abbca3b100b2fc8e" exitCode=0 Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.006754 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.006821 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" event={"ID":"85b405af-7314-4e53-93a5-252b69153561","Type":"ContainerDied","Data":"4284b5552eca9842bbe2aed75c1f5823dcb142543281afc7abbca3b100b2fc8e"} Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.006872 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" event={"ID":"85b405af-7314-4e53-93a5-252b69153561","Type":"ContainerDied","Data":"b50d7a209d2fcc5cb17e88e539bff4914e9d70de68aa4c3a0de07ad93e7848e4"} Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.006895 4775 scope.go:117] "RemoveContainer" containerID="4284b5552eca9842bbe2aed75c1f5823dcb142543281afc7abbca3b100b2fc8e" Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.023463 4775 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/85b405af-7314-4e53-93a5-252b69153561-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.023484 4775 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.023493 4775 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/85b405af-7314-4e53-93a5-252b69153561-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 
14:11:05.023502 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkptx\" (UniqueName: \"kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-kube-api-access-hkptx\") on node \"crc\" DevicePath \"\"" Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.024183 4775 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/85b405af-7314-4e53-93a5-252b69153561-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.024216 4775 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/85b405af-7314-4e53-93a5-252b69153561-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.024226 4775 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.035983 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xpwjl"] Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.038263 4775 scope.go:117] "RemoveContainer" containerID="4284b5552eca9842bbe2aed75c1f5823dcb142543281afc7abbca3b100b2fc8e" Jan 23 14:11:05 crc kubenswrapper[4775]: E0123 14:11:05.038725 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4284b5552eca9842bbe2aed75c1f5823dcb142543281afc7abbca3b100b2fc8e\": container with ID starting with 4284b5552eca9842bbe2aed75c1f5823dcb142543281afc7abbca3b100b2fc8e not found: ID does not exist" containerID="4284b5552eca9842bbe2aed75c1f5823dcb142543281afc7abbca3b100b2fc8e" Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.038753 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4284b5552eca9842bbe2aed75c1f5823dcb142543281afc7abbca3b100b2fc8e"} err="failed to get container status \"4284b5552eca9842bbe2aed75c1f5823dcb142543281afc7abbca3b100b2fc8e\": rpc error: code = NotFound desc = could not find container \"4284b5552eca9842bbe2aed75c1f5823dcb142543281afc7abbca3b100b2fc8e\": container with ID starting with 4284b5552eca9842bbe2aed75c1f5823dcb142543281afc7abbca3b100b2fc8e not found: ID does not exist" Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.041363 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xpwjl"] Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.530333 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sx4qm" podUID="0c94dee4-8e79-4f60-a8b9-2c1f33490ba7" containerName="registry-server" probeResult="failure" output=< Jan 23 14:11:05 crc kubenswrapper[4775]: timeout: failed to connect service ":50051" within 1s Jan 23 14:11:05 crc kubenswrapper[4775]: > Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.726539 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85b405af-7314-4e53-93a5-252b69153561" path="/var/lib/kubelet/pods/85b405af-7314-4e53-93a5-252b69153561/volumes" Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.903255 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8jjcj" Jan 23 14:11:05 crc 
kubenswrapper[4775]: I0123 14:11:05.903332 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8jjcj" Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.944655 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8jjcj" Jan 23 14:11:06 crc kubenswrapper[4775]: I0123 14:11:06.049735 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8jjcj" Jan 23 14:11:06 crc kubenswrapper[4775]: I0123 14:11:06.558949 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq"] Jan 23 14:11:06 crc kubenswrapper[4775]: I0123 14:11:06.559140 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq" podUID="d7ab4aa6-c476-4952-a259-e1e63a42bb69" containerName="route-controller-manager" containerID="cri-o://781a04fc229c3442a54b74394d8d8073527ad1460a3c3be51f6f7244137482ea" gracePeriod=30 Jan 23 14:11:06 crc kubenswrapper[4775]: I0123 14:11:06.889036 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fxcrw" Jan 23 14:11:06 crc kubenswrapper[4775]: I0123 14:11:06.889100 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fxcrw" Jan 23 14:11:06 crc kubenswrapper[4775]: I0123 14:11:06.929384 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fxcrw" Jan 23 14:11:07 crc kubenswrapper[4775]: I0123 14:11:07.020965 4775 generic.go:334] "Generic (PLEG): container finished" podID="d7ab4aa6-c476-4952-a259-e1e63a42bb69" containerID="781a04fc229c3442a54b74394d8d8073527ad1460a3c3be51f6f7244137482ea" exitCode=0 Jan 23 14:11:07 crc kubenswrapper[4775]: I0123 14:11:07.021857 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq" event={"ID":"d7ab4aa6-c476-4952-a259-e1e63a42bb69","Type":"ContainerDied","Data":"781a04fc229c3442a54b74394d8d8073527ad1460a3c3be51f6f7244137482ea"} Jan 23 14:11:07 crc kubenswrapper[4775]: I0123 14:11:07.067693 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fxcrw" Jan 23 14:11:07 crc kubenswrapper[4775]: I0123 14:11:07.451394 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq" Jan 23 14:11:07 crc kubenswrapper[4775]: I0123 14:11:07.559528 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7ab4aa6-c476-4952-a259-e1e63a42bb69-client-ca\") pod \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\" (UID: \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\") " Jan 23 14:11:07 crc kubenswrapper[4775]: I0123 14:11:07.559599 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7ab4aa6-c476-4952-a259-e1e63a42bb69-serving-cert\") pod \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\" (UID: \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\") " Jan 23 14:11:07 crc kubenswrapper[4775]: I0123 14:11:07.559720 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dch9g\" (UniqueName: \"kubernetes.io/projected/d7ab4aa6-c476-4952-a259-e1e63a42bb69-kube-api-access-dch9g\") pod \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\" (UID: \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\") " Jan 23 14:11:07 crc kubenswrapper[4775]: I0123 14:11:07.559743 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7ab4aa6-c476-4952-a259-e1e63a42bb69-config\") pod \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\" (UID: \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\") " Jan 23 14:11:07 crc kubenswrapper[4775]: I0123 14:11:07.560873 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7ab4aa6-c476-4952-a259-e1e63a42bb69-config" (OuterVolumeSpecName: "config") pod "d7ab4aa6-c476-4952-a259-e1e63a42bb69" (UID: "d7ab4aa6-c476-4952-a259-e1e63a42bb69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:11:07 crc kubenswrapper[4775]: I0123 14:11:07.561112 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7ab4aa6-c476-4952-a259-e1e63a42bb69-client-ca" (OuterVolumeSpecName: "client-ca") pod "d7ab4aa6-c476-4952-a259-e1e63a42bb69" (UID: "d7ab4aa6-c476-4952-a259-e1e63a42bb69"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:11:07 crc kubenswrapper[4775]: I0123 14:11:07.565493 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7ab4aa6-c476-4952-a259-e1e63a42bb69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7ab4aa6-c476-4952-a259-e1e63a42bb69" (UID: "d7ab4aa6-c476-4952-a259-e1e63a42bb69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:11:07 crc kubenswrapper[4775]: I0123 14:11:07.566175 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7ab4aa6-c476-4952-a259-e1e63a42bb69-kube-api-access-dch9g" (OuterVolumeSpecName: "kube-api-access-dch9g") pod "d7ab4aa6-c476-4952-a259-e1e63a42bb69" (UID: "d7ab4aa6-c476-4952-a259-e1e63a42bb69"). InnerVolumeSpecName "kube-api-access-dch9g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:11:07 crc kubenswrapper[4775]: I0123 14:11:07.661161 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7ab4aa6-c476-4952-a259-e1e63a42bb69-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:11:07 crc kubenswrapper[4775]: I0123 14:11:07.661208 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7ab4aa6-c476-4952-a259-e1e63a42bb69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:11:07 crc kubenswrapper[4775]: I0123 14:11:07.661223 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dch9g\" (UniqueName: \"kubernetes.io/projected/d7ab4aa6-c476-4952-a259-e1e63a42bb69-kube-api-access-dch9g\") on node \"crc\" DevicePath \"\"" Jan 23 14:11:07 crc kubenswrapper[4775]: I0123 14:11:07.661237 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7ab4aa6-c476-4952-a259-e1e63a42bb69-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.030794 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.031560 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq" event={"ID":"d7ab4aa6-c476-4952-a259-e1e63a42bb69","Type":"ContainerDied","Data":"0d59494029faa0dc8c83935b2a8d96eb1666ed423d428c52740a79423310818f"} Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.031625 4775 scope.go:117] "RemoveContainer" containerID="781a04fc229c3442a54b74394d8d8073527ad1460a3c3be51f6f7244137482ea" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.056098 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq"] Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.064600 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq"] Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.124428 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t"] Jan 23 14:11:08 crc kubenswrapper[4775]: E0123 14:11:08.124652 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85b405af-7314-4e53-93a5-252b69153561" containerName="registry" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.124666 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="85b405af-7314-4e53-93a5-252b69153561" containerName="registry" Jan 23 14:11:08 crc kubenswrapper[4775]: E0123 14:11:08.124682 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7ab4aa6-c476-4952-a259-e1e63a42bb69" containerName="route-controller-manager" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.124691 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7ab4aa6-c476-4952-a259-e1e63a42bb69" containerName="route-controller-manager" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.124825 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="85b405af-7314-4e53-93a5-252b69153561" containerName="registry" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.124838 4775 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d7ab4aa6-c476-4952-a259-e1e63a42bb69" containerName="route-controller-manager" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.125254 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.127346 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.127411 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.127571 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.127596 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.127949 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.131045 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.137837 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t"] Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.166417 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/551df5d9-a597-45dd-bee6-d189f022e455-config\") pod \"route-controller-manager-5b89f6874d-q9f2t\" (UID: \"551df5d9-a597-45dd-bee6-d189f022e455\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.166471 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/551df5d9-a597-45dd-bee6-d189f022e455-client-ca\") pod \"route-controller-manager-5b89f6874d-q9f2t\" (UID: \"551df5d9-a597-45dd-bee6-d189f022e455\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.166568 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/551df5d9-a597-45dd-bee6-d189f022e455-serving-cert\") pod \"route-controller-manager-5b89f6874d-q9f2t\" (UID: \"551df5d9-a597-45dd-bee6-d189f022e455\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.166660 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7v4k\" (UniqueName: \"kubernetes.io/projected/551df5d9-a597-45dd-bee6-d189f022e455-kube-api-access-c7v4k\") pod \"route-controller-manager-5b89f6874d-q9f2t\" (UID: \"551df5d9-a597-45dd-bee6-d189f022e455\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 
14:11:08.268072 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7v4k\" (UniqueName: \"kubernetes.io/projected/551df5d9-a597-45dd-bee6-d189f022e455-kube-api-access-c7v4k\") pod \"route-controller-manager-5b89f6874d-q9f2t\" (UID: \"551df5d9-a597-45dd-bee6-d189f022e455\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.268170 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/551df5d9-a597-45dd-bee6-d189f022e455-config\") pod \"route-controller-manager-5b89f6874d-q9f2t\" (UID: \"551df5d9-a597-45dd-bee6-d189f022e455\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.268211 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/551df5d9-a597-45dd-bee6-d189f022e455-client-ca\") pod \"route-controller-manager-5b89f6874d-q9f2t\" (UID: \"551df5d9-a597-45dd-bee6-d189f022e455\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.268303 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/551df5d9-a597-45dd-bee6-d189f022e455-serving-cert\") pod \"route-controller-manager-5b89f6874d-q9f2t\" (UID: \"551df5d9-a597-45dd-bee6-d189f022e455\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.269377 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/551df5d9-a597-45dd-bee6-d189f022e455-client-ca\") pod \"route-controller-manager-5b89f6874d-q9f2t\" (UID: \"551df5d9-a597-45dd-bee6-d189f022e455\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.269721 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/551df5d9-a597-45dd-bee6-d189f022e455-config\") pod \"route-controller-manager-5b89f6874d-q9f2t\" (UID: \"551df5d9-a597-45dd-bee6-d189f022e455\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.284002 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/551df5d9-a597-45dd-bee6-d189f022e455-serving-cert\") pod \"route-controller-manager-5b89f6874d-q9f2t\" (UID: \"551df5d9-a597-45dd-bee6-d189f022e455\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.290437 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7v4k\" (UniqueName: \"kubernetes.io/projected/551df5d9-a597-45dd-bee6-d189f022e455-kube-api-access-c7v4k\") pod \"route-controller-manager-5b89f6874d-q9f2t\" (UID: \"551df5d9-a597-45dd-bee6-d189f022e455\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.469529 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.924113 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t"] Jan 23 14:11:09 crc kubenswrapper[4775]: I0123 14:11:09.037207 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" event={"ID":"551df5d9-a597-45dd-bee6-d189f022e455","Type":"ContainerStarted","Data":"cb31d93d1fc5e5f6be637dbe5d5d830acc2a27cabe7f04f8d82c7df7921b9df1"} Jan 23 14:11:09 crc kubenswrapper[4775]: I0123 14:11:09.720486 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7ab4aa6-c476-4952-a259-e1e63a42bb69" path="/var/lib/kubelet/pods/d7ab4aa6-c476-4952-a259-e1e63a42bb69/volumes" Jan 23 14:11:10 crc kubenswrapper[4775]: I0123 14:11:10.044417 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" event={"ID":"551df5d9-a597-45dd-bee6-d189f022e455","Type":"ContainerStarted","Data":"247c01734aec8198b78c8d30ee090eb949696c01452950dc2c29c4a7df0b82eb"} Jan 23 14:11:10 crc kubenswrapper[4775]: I0123 14:11:10.044671 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" Jan 23 14:11:10 crc kubenswrapper[4775]: I0123 14:11:10.050852 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" Jan 23 14:11:10 crc kubenswrapper[4775]: I0123 14:11:10.063666 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" podStartSLOduration=4.06364858 podStartE2EDuration="4.06364858s" podCreationTimestamp="2026-01-23 14:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:11:10.062515677 +0000 UTC m=+417.057344457" watchObservedRunningTime="2026-01-23 14:11:10.06364858 +0000 UTC m=+417.058477320" Jan 23 14:11:14 crc kubenswrapper[4775]: I0123 14:11:14.558571 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sx4qm" Jan 23 14:11:14 crc kubenswrapper[4775]: I0123 14:11:14.599657 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sx4qm" Jan 23 14:12:53 crc kubenswrapper[4775]: I0123 14:12:53.219581 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:12:53 crc kubenswrapper[4775]: I0123 14:12:53.220924 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:13:23 crc kubenswrapper[4775]: I0123 14:13:23.219295 4775 patch_prober.go:28] interesting 
Jan 23 14:13:23 crc kubenswrapper[4775]: I0123 14:13:23.219295 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:13:23 crc kubenswrapper[4775]: I0123 14:13:23.220233 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:13:53 crc kubenswrapper[4775]: I0123 14:13:53.219922 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:13:53 crc kubenswrapper[4775]: I0123 14:13:53.220554 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:13:53 crc kubenswrapper[4775]: I0123 14:13:53.220621 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg"
Jan 23 14:13:53 crc kubenswrapper[4775]: I0123 14:13:53.221586 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3a30391cad6397529420dfc5378ada691294f3663e7d36abc04ee2debc01dfeb"} pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 14:13:53 crc kubenswrapper[4775]: I0123 14:13:53.221691 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" containerID="cri-o://3a30391cad6397529420dfc5378ada691294f3663e7d36abc04ee2debc01dfeb" gracePeriod=600
Jan 23 14:13:54 crc kubenswrapper[4775]: I0123 14:13:54.116295 4775 generic.go:334] "Generic (PLEG): container finished" podID="4fea0767-0566-4214-855d-ed0373946271" containerID="3a30391cad6397529420dfc5378ada691294f3663e7d36abc04ee2debc01dfeb" exitCode=0
Jan 23 14:13:54 crc kubenswrapper[4775]: I0123 14:13:54.116406 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerDied","Data":"3a30391cad6397529420dfc5378ada691294f3663e7d36abc04ee2debc01dfeb"}
Jan 23 14:13:54 crc kubenswrapper[4775]: I0123 14:13:54.117111 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerStarted","Data":"815b4a32200fdfae17b328752ad92ad8ee14e4c70962ef6a5caef5715b1e0d13"}
Jan 23 14:13:54 crc kubenswrapper[4775]: I0123 14:13:54.117149 4775 scope.go:117] "RemoveContainer" containerID="64681a72387a3235a4c6d3370b32de4e57c80d8102b47cdde5e10511ccb7381b"
Jan 23 14:15:00 crc kubenswrapper[4775]: I0123 14:15:00.225780 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc"]
Jan 23 14:15:00 crc kubenswrapper[4775]: I0123 14:15:00.228048 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc"
Jan 23 14:15:00 crc kubenswrapper[4775]: I0123 14:15:00.235724 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc"]
Jan 23 14:15:00 crc kubenswrapper[4775]: I0123 14:15:00.236272 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 23 14:15:00 crc kubenswrapper[4775]: I0123 14:15:00.236582 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 23 14:15:00 crc kubenswrapper[4775]: I0123 14:15:00.418313 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-secret-volume\") pod \"collect-profiles-29486295-frwlc\" (UID: \"c37bf395-5578-4ad9-b210-8dd70a3e7d7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc"
Jan 23 14:15:00 crc kubenswrapper[4775]: I0123 14:15:00.418784 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-config-volume\") pod \"collect-profiles-29486295-frwlc\" (UID: \"c37bf395-5578-4ad9-b210-8dd70a3e7d7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc"
Jan 23 14:15:00 crc kubenswrapper[4775]: I0123 14:15:00.418919 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jdf7\" (UniqueName: \"kubernetes.io/projected/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-kube-api-access-8jdf7\") pod \"collect-profiles-29486295-frwlc\" (UID: \"c37bf395-5578-4ad9-b210-8dd70a3e7d7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc"
Jan 23 14:15:00 crc kubenswrapper[4775]: I0123 14:15:00.520762 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jdf7\" (UniqueName: \"kubernetes.io/projected/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-kube-api-access-8jdf7\") pod \"collect-profiles-29486295-frwlc\" (UID: \"c37bf395-5578-4ad9-b210-8dd70a3e7d7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc"
Jan 23 14:15:00 crc kubenswrapper[4775]: I0123 14:15:00.520894 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-secret-volume\") pod \"collect-profiles-29486295-frwlc\" (UID: \"c37bf395-5578-4ad9-b210-8dd70a3e7d7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc"
Jan 23 14:15:00 crc kubenswrapper[4775]: I0123 14:15:00.520963 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-config-volume\") pod \"collect-profiles-29486295-frwlc\" (UID: \"c37bf395-5578-4ad9-b210-8dd70a3e7d7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc"
Jan 23 14:15:00 crc kubenswrapper[4775]: I0123 14:15:00.522759 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-config-volume\") pod \"collect-profiles-29486295-frwlc\" (UID: \"c37bf395-5578-4ad9-b210-8dd70a3e7d7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc"
Jan 23 14:15:00 crc kubenswrapper[4775]: I0123 14:15:00.531778 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-secret-volume\") pod \"collect-profiles-29486295-frwlc\" (UID: \"c37bf395-5578-4ad9-b210-8dd70a3e7d7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc"
Jan 23 14:15:00 crc kubenswrapper[4775]: I0123 14:15:00.551960 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jdf7\" (UniqueName: \"kubernetes.io/projected/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-kube-api-access-8jdf7\") pod \"collect-profiles-29486295-frwlc\" (UID: \"c37bf395-5578-4ad9-b210-8dd70a3e7d7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc"
Jan 23 14:15:00 crc kubenswrapper[4775]: I0123 14:15:00.575316 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc"
Jan 23 14:15:01 crc kubenswrapper[4775]: I0123 14:15:01.036580 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc"]
Jan 23 14:15:01 crc kubenswrapper[4775]: I0123 14:15:01.564270 4775 generic.go:334] "Generic (PLEG): container finished" podID="c37bf395-5578-4ad9-b210-8dd70a3e7d7a" containerID="15a8d47f089ab5d6d8e17473d5ab659be4c60d627bd808897d5ab6a5904a76cc" exitCode=0
Jan 23 14:15:01 crc kubenswrapper[4775]: I0123 14:15:01.564360 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc" event={"ID":"c37bf395-5578-4ad9-b210-8dd70a3e7d7a","Type":"ContainerDied","Data":"15a8d47f089ab5d6d8e17473d5ab659be4c60d627bd808897d5ab6a5904a76cc"}
Jan 23 14:15:01 crc kubenswrapper[4775]: I0123 14:15:01.564410 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc" event={"ID":"c37bf395-5578-4ad9-b210-8dd70a3e7d7a","Type":"ContainerStarted","Data":"b78398132297fd0084b762a794afb4b03cdb4aa115c5f65a549ba55cd1bb09a2"}
Jan 23 14:15:02 crc kubenswrapper[4775]: I0123 14:15:02.891583 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc"
Jan 23 14:15:03 crc kubenswrapper[4775]: I0123 14:15:03.060465 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-config-volume\") pod \"c37bf395-5578-4ad9-b210-8dd70a3e7d7a\" (UID: \"c37bf395-5578-4ad9-b210-8dd70a3e7d7a\") "
Jan 23 14:15:03 crc kubenswrapper[4775]: I0123 14:15:03.060938 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jdf7\" (UniqueName: \"kubernetes.io/projected/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-kube-api-access-8jdf7\") pod \"c37bf395-5578-4ad9-b210-8dd70a3e7d7a\" (UID: \"c37bf395-5578-4ad9-b210-8dd70a3e7d7a\") "
Jan 23 14:15:03 crc kubenswrapper[4775]: I0123 14:15:03.061001 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-secret-volume\") pod \"c37bf395-5578-4ad9-b210-8dd70a3e7d7a\" (UID: \"c37bf395-5578-4ad9-b210-8dd70a3e7d7a\") "
Jan 23 14:15:03 crc kubenswrapper[4775]: I0123 14:15:03.061437 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-config-volume" (OuterVolumeSpecName: "config-volume") pod "c37bf395-5578-4ad9-b210-8dd70a3e7d7a" (UID: "c37bf395-5578-4ad9-b210-8dd70a3e7d7a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:15:03 crc kubenswrapper[4775]: I0123 14:15:03.067500 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-kube-api-access-8jdf7" (OuterVolumeSpecName: "kube-api-access-8jdf7") pod "c37bf395-5578-4ad9-b210-8dd70a3e7d7a" (UID: "c37bf395-5578-4ad9-b210-8dd70a3e7d7a"). InnerVolumeSpecName "kube-api-access-8jdf7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:15:03 crc kubenswrapper[4775]: I0123 14:15:03.068528 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c37bf395-5578-4ad9-b210-8dd70a3e7d7a" (UID: "c37bf395-5578-4ad9-b210-8dd70a3e7d7a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:15:03 crc kubenswrapper[4775]: I0123 14:15:03.162888 4775 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-config-volume\") on node \"crc\" DevicePath \"\""
Jan 23 14:15:03 crc kubenswrapper[4775]: I0123 14:15:03.162924 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jdf7\" (UniqueName: \"kubernetes.io/projected/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-kube-api-access-8jdf7\") on node \"crc\" DevicePath \"\""
Jan 23 14:15:03 crc kubenswrapper[4775]: I0123 14:15:03.162937 4775 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 23 14:15:03 crc kubenswrapper[4775]: I0123 14:15:03.579530 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc" event={"ID":"c37bf395-5578-4ad9-b210-8dd70a3e7d7a","Type":"ContainerDied","Data":"b78398132297fd0084b762a794afb4b03cdb4aa115c5f65a549ba55cd1bb09a2"}
Jan 23 14:15:03 crc kubenswrapper[4775]: I0123 14:15:03.579576 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b78398132297fd0084b762a794afb4b03cdb4aa115c5f65a549ba55cd1bb09a2"
Jan 23 14:15:03 crc kubenswrapper[4775]: I0123 14:15:03.579623 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc"
Jan 23 14:15:53 crc kubenswrapper[4775]: I0123 14:15:53.219557 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:15:53 crc kubenswrapper[4775]: I0123 14:15:53.220479 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:16:23 crc kubenswrapper[4775]: I0123 14:16:23.220228 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:16:23 crc kubenswrapper[4775]: I0123 14:16:23.221387 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:16:53 crc kubenswrapper[4775]: I0123 14:16:53.219571 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:16:53 crc kubenswrapper[4775]: I0123 14:16:53.221791 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:16:53 crc kubenswrapper[4775]: I0123 14:16:53.224669 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"815b4a32200fdfae17b328752ad92ad8ee14e4c70962ef6a5caef5715b1e0d13"} pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 14:16:53 crc kubenswrapper[4775]: I0123 14:16:53.224832 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" containerID="cri-o://815b4a32200fdfae17b328752ad92ad8ee14e4c70962ef6a5caef5715b1e0d13" gracePeriod=600 Jan 23 14:16:54 crc kubenswrapper[4775]: I0123 14:16:54.330627 4775 generic.go:334] "Generic (PLEG): container finished" podID="4fea0767-0566-4214-855d-ed0373946271" containerID="815b4a32200fdfae17b328752ad92ad8ee14e4c70962ef6a5caef5715b1e0d13" exitCode=0 Jan 23 14:16:54 crc kubenswrapper[4775]: I0123 14:16:54.330730 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerDied","Data":"815b4a32200fdfae17b328752ad92ad8ee14e4c70962ef6a5caef5715b1e0d13"} Jan 23 14:16:54 crc kubenswrapper[4775]: I0123 14:16:54.331288 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerStarted","Data":"fa8fa956c376098d850acaf12f40cfec6f35655328fae4e2ad440d4fb20e4881"} Jan 23 14:16:54 crc kubenswrapper[4775]: I0123 14:16:54.331326 4775 scope.go:117] "RemoveContainer" containerID="3a30391cad6397529420dfc5378ada691294f3663e7d36abc04ee2debc01dfeb" Jan 23 14:16:55 crc kubenswrapper[4775]: I0123 14:16:55.233680 4775 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.459383 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll"] Jan 23 14:17:29 crc kubenswrapper[4775]: E0123 14:17:29.460374 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c37bf395-5578-4ad9-b210-8dd70a3e7d7a" containerName="collect-profiles" Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.460392 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="c37bf395-5578-4ad9-b210-8dd70a3e7d7a" containerName="collect-profiles" Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.460497 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="c37bf395-5578-4ad9-b210-8dd70a3e7d7a" containerName="collect-profiles" Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.461260 4775 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.463130 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.474482 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll"] Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.576082 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll\" (UID: \"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.576567 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll\" (UID: \"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.576623 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7x8s\" (UniqueName: \"kubernetes.io/projected/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-kube-api-access-j7x8s\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll\" (UID: \"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.678310 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll\" (UID: \"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.678393 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll\" (UID: \"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.678430 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7x8s\" (UniqueName: \"kubernetes.io/projected/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-kube-api-access-j7x8s\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll\" (UID: \"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.679574 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll\" (UID: \"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.682198 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll\" (UID: \"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.711032 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7x8s\" (UniqueName: \"kubernetes.io/projected/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-kube-api-access-j7x8s\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll\" (UID: \"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.775610 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.973470 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll"] Jan 23 14:17:30 crc kubenswrapper[4775]: I0123 14:17:30.570865 4775 generic.go:334] "Generic (PLEG): container finished" podID="d4d873a3-d698-439c-a1de-c9a7fc9e1e6d" containerID="aeec217b1013090323ad7b543d66684166ebbe2f392de79d83faa5baea26b0a7" exitCode=0 Jan 23 14:17:30 crc kubenswrapper[4775]: I0123 14:17:30.570930 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" event={"ID":"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d","Type":"ContainerDied","Data":"aeec217b1013090323ad7b543d66684166ebbe2f392de79d83faa5baea26b0a7"} Jan 23 14:17:30 crc kubenswrapper[4775]: I0123 14:17:30.570968 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" event={"ID":"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d","Type":"ContainerStarted","Data":"0bc1d466cfd8bb22da2d97e26f7719a28ede043102dedccaeaca69546f856582"} Jan 23 14:17:30 crc kubenswrapper[4775]: I0123 14:17:30.573207 4775 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 14:17:31 crc kubenswrapper[4775]: I0123 14:17:31.550107 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lflht"] Jan 23 14:17:31 crc kubenswrapper[4775]: I0123 14:17:31.552370 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:31 crc kubenswrapper[4775]: I0123 14:17:31.568933 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lflht"] Jan 23 14:17:31 crc kubenswrapper[4775]: I0123 14:17:31.708557 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/748a9ff6-4b80-40f9-ae41-37bc66c272f6-utilities\") pod \"redhat-operators-lflht\" (UID: \"748a9ff6-4b80-40f9-ae41-37bc66c272f6\") " pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:31 crc kubenswrapper[4775]: I0123 14:17:31.708892 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/748a9ff6-4b80-40f9-ae41-37bc66c272f6-catalog-content\") pod \"redhat-operators-lflht\" (UID: \"748a9ff6-4b80-40f9-ae41-37bc66c272f6\") " pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:31 crc kubenswrapper[4775]: I0123 14:17:31.708949 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gpcj\" (UniqueName: \"kubernetes.io/projected/748a9ff6-4b80-40f9-ae41-37bc66c272f6-kube-api-access-4gpcj\") pod \"redhat-operators-lflht\" (UID: \"748a9ff6-4b80-40f9-ae41-37bc66c272f6\") " pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:31 crc kubenswrapper[4775]: I0123 14:17:31.809956 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/748a9ff6-4b80-40f9-ae41-37bc66c272f6-catalog-content\") pod \"redhat-operators-lflht\" (UID: \"748a9ff6-4b80-40f9-ae41-37bc66c272f6\") " pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:31 crc kubenswrapper[4775]: I0123 14:17:31.810007 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gpcj\" (UniqueName: \"kubernetes.io/projected/748a9ff6-4b80-40f9-ae41-37bc66c272f6-kube-api-access-4gpcj\") pod \"redhat-operators-lflht\" (UID: \"748a9ff6-4b80-40f9-ae41-37bc66c272f6\") " pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:31 crc kubenswrapper[4775]: I0123 14:17:31.810054 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/748a9ff6-4b80-40f9-ae41-37bc66c272f6-utilities\") pod \"redhat-operators-lflht\" (UID: \"748a9ff6-4b80-40f9-ae41-37bc66c272f6\") " pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:31 crc kubenswrapper[4775]: I0123 14:17:31.810717 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/748a9ff6-4b80-40f9-ae41-37bc66c272f6-utilities\") pod \"redhat-operators-lflht\" (UID: \"748a9ff6-4b80-40f9-ae41-37bc66c272f6\") " pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:31 crc kubenswrapper[4775]: I0123 14:17:31.811047 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/748a9ff6-4b80-40f9-ae41-37bc66c272f6-catalog-content\") pod \"redhat-operators-lflht\" (UID: \"748a9ff6-4b80-40f9-ae41-37bc66c272f6\") " pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:31 crc kubenswrapper[4775]: I0123 14:17:31.842080 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4gpcj\" (UniqueName: \"kubernetes.io/projected/748a9ff6-4b80-40f9-ae41-37bc66c272f6-kube-api-access-4gpcj\") pod \"redhat-operators-lflht\" (UID: \"748a9ff6-4b80-40f9-ae41-37bc66c272f6\") " pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:31 crc kubenswrapper[4775]: I0123 14:17:31.898781 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:32 crc kubenswrapper[4775]: I0123 14:17:32.093420 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lflht"] Jan 23 14:17:32 crc kubenswrapper[4775]: W0123 14:17:32.097335 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod748a9ff6_4b80_40f9_ae41_37bc66c272f6.slice/crio-93c922f1487ddf500d1f9351c5caa4eedc2618e3851fee725cf2afd7fd0be358 WatchSource:0}: Error finding container 93c922f1487ddf500d1f9351c5caa4eedc2618e3851fee725cf2afd7fd0be358: Status 404 returned error can't find the container with id 93c922f1487ddf500d1f9351c5caa4eedc2618e3851fee725cf2afd7fd0be358 Jan 23 14:17:32 crc kubenswrapper[4775]: I0123 14:17:32.584293 4775 generic.go:334] "Generic (PLEG): container finished" podID="d4d873a3-d698-439c-a1de-c9a7fc9e1e6d" containerID="dea1dd28a2731b8f977fbd369d244552810d648501576d68e727344cd0d1e33e" exitCode=0 Jan 23 14:17:32 crc kubenswrapper[4775]: I0123 14:17:32.584373 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" event={"ID":"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d","Type":"ContainerDied","Data":"dea1dd28a2731b8f977fbd369d244552810d648501576d68e727344cd0d1e33e"} Jan 23 14:17:32 crc kubenswrapper[4775]: I0123 14:17:32.587170 4775 generic.go:334] "Generic (PLEG): container finished" podID="748a9ff6-4b80-40f9-ae41-37bc66c272f6" containerID="f97a99a72e6da74778e3548426a45903a3d520396f1383be0c6443f902f8596a" exitCode=0 Jan 23 14:17:32 crc kubenswrapper[4775]: I0123 14:17:32.587213 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lflht" event={"ID":"748a9ff6-4b80-40f9-ae41-37bc66c272f6","Type":"ContainerDied","Data":"f97a99a72e6da74778e3548426a45903a3d520396f1383be0c6443f902f8596a"} Jan 23 14:17:32 crc kubenswrapper[4775]: I0123 14:17:32.587238 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lflht" event={"ID":"748a9ff6-4b80-40f9-ae41-37bc66c272f6","Type":"ContainerStarted","Data":"93c922f1487ddf500d1f9351c5caa4eedc2618e3851fee725cf2afd7fd0be358"} Jan 23 14:17:33 crc kubenswrapper[4775]: I0123 14:17:33.596053 4775 generic.go:334] "Generic (PLEG): container finished" podID="d4d873a3-d698-439c-a1de-c9a7fc9e1e6d" containerID="ec4a2ae89c81b2c9040c94e5b02d11291b835380df3a78ae667e5991cc2029bc" exitCode=0 Jan 23 14:17:33 crc kubenswrapper[4775]: I0123 14:17:33.596362 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" event={"ID":"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d","Type":"ContainerDied","Data":"ec4a2ae89c81b2c9040c94e5b02d11291b835380df3a78ae667e5991cc2029bc"} Jan 23 14:17:34 crc kubenswrapper[4775]: I0123 14:17:34.605246 4775 generic.go:334] "Generic (PLEG): container finished" podID="748a9ff6-4b80-40f9-ae41-37bc66c272f6" containerID="61ce9ba1643c99fc37fc14a63747755de7afc6a9d3819f1c9a37d622b4cf7f7f" exitCode=0 Jan 23 14:17:34 
Jan 23 14:17:34 crc kubenswrapper[4775]: I0123 14:17:34.605374 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lflht" event={"ID":"748a9ff6-4b80-40f9-ae41-37bc66c272f6","Type":"ContainerDied","Data":"61ce9ba1643c99fc37fc14a63747755de7afc6a9d3819f1c9a37d622b4cf7f7f"}
Jan 23 14:17:34 crc kubenswrapper[4775]: I0123 14:17:34.858980 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll"
Jan 23 14:17:35 crc kubenswrapper[4775]: I0123 14:17:35.055373 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-util\") pod \"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d\" (UID: \"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d\") "
Jan 23 14:17:35 crc kubenswrapper[4775]: I0123 14:17:35.055478 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-bundle\") pod \"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d\" (UID: \"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d\") "
Jan 23 14:17:35 crc kubenswrapper[4775]: I0123 14:17:35.055580 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7x8s\" (UniqueName: \"kubernetes.io/projected/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-kube-api-access-j7x8s\") pod \"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d\" (UID: \"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d\") "
Jan 23 14:17:35 crc kubenswrapper[4775]: I0123 14:17:35.057332 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-bundle" (OuterVolumeSpecName: "bundle") pod "d4d873a3-d698-439c-a1de-c9a7fc9e1e6d" (UID: "d4d873a3-d698-439c-a1de-c9a7fc9e1e6d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:17:35 crc kubenswrapper[4775]: I0123 14:17:35.065114 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-kube-api-access-j7x8s" (OuterVolumeSpecName: "kube-api-access-j7x8s") pod "d4d873a3-d698-439c-a1de-c9a7fc9e1e6d" (UID: "d4d873a3-d698-439c-a1de-c9a7fc9e1e6d"). InnerVolumeSpecName "kube-api-access-j7x8s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:17:35 crc kubenswrapper[4775]: I0123 14:17:35.069927 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-util" (OuterVolumeSpecName: "util") pod "d4d873a3-d698-439c-a1de-c9a7fc9e1e6d" (UID: "d4d873a3-d698-439c-a1de-c9a7fc9e1e6d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:17:35 crc kubenswrapper[4775]: I0123 14:17:35.157750 4775 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-util\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:35 crc kubenswrapper[4775]: I0123 14:17:35.157881 4775 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:35 crc kubenswrapper[4775]: I0123 14:17:35.157901 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7x8s\" (UniqueName: \"kubernetes.io/projected/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-kube-api-access-j7x8s\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:35 crc kubenswrapper[4775]: I0123 14:17:35.613716 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lflht" event={"ID":"748a9ff6-4b80-40f9-ae41-37bc66c272f6","Type":"ContainerStarted","Data":"f06d4f6767a81a7749fa41e7dfaa09c6b4cb54aa8866d79a23981a879ae6dde5"}
Jan 23 14:17:35 crc kubenswrapper[4775]: I0123 14:17:35.616108 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" event={"ID":"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d","Type":"ContainerDied","Data":"0bc1d466cfd8bb22da2d97e26f7719a28ede043102dedccaeaca69546f856582"}
Jan 23 14:17:35 crc kubenswrapper[4775]: I0123 14:17:35.616150 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll"
Jan 23 14:17:35 crc kubenswrapper[4775]: I0123 14:17:35.616151 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bc1d466cfd8bb22da2d97e26f7719a28ede043102dedccaeaca69546f856582"
Jan 23 14:17:35 crc kubenswrapper[4775]: I0123 14:17:35.642371 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lflht" podStartSLOduration=2.130203808 podStartE2EDuration="4.642351755s" podCreationTimestamp="2026-01-23 14:17:31 +0000 UTC" firstStartedPulling="2026-01-23 14:17:32.588351788 +0000 UTC m=+799.583180548" lastFinishedPulling="2026-01-23 14:17:35.100499745 +0000 UTC m=+802.095328495" observedRunningTime="2026-01-23 14:17:35.640986166 +0000 UTC m=+802.635814946" watchObservedRunningTime="2026-01-23 14:17:35.642351755 +0000 UTC m=+802.637180495"
Jan 23 14:17:37 crc kubenswrapper[4775]: I0123 14:17:37.066207 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-gq778"]
Jan 23 14:17:37 crc kubenswrapper[4775]: E0123 14:17:37.066445 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4d873a3-d698-439c-a1de-c9a7fc9e1e6d" containerName="util"
Jan 23 14:17:37 crc kubenswrapper[4775]: I0123 14:17:37.066461 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4d873a3-d698-439c-a1de-c9a7fc9e1e6d" containerName="util"
Jan 23 14:17:37 crc kubenswrapper[4775]: E0123 14:17:37.066476 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4d873a3-d698-439c-a1de-c9a7fc9e1e6d" containerName="extract"
Jan 23 14:17:37 crc kubenswrapper[4775]: I0123 14:17:37.066482 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4d873a3-d698-439c-a1de-c9a7fc9e1e6d" containerName="extract"
Jan 23 14:17:37 crc kubenswrapper[4775]: E0123 14:17:37.066498 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4d873a3-d698-439c-a1de-c9a7fc9e1e6d" containerName="pull"
Jan 23 14:17:37 crc kubenswrapper[4775]: I0123 14:17:37.066505 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4d873a3-d698-439c-a1de-c9a7fc9e1e6d" containerName="pull"
Jan 23 14:17:37 crc kubenswrapper[4775]: I0123 14:17:37.066620 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4d873a3-d698-439c-a1de-c9a7fc9e1e6d" containerName="extract"
Jan 23 14:17:37 crc kubenswrapper[4775]: I0123 14:17:37.067067 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-gq778"
Jan 23 14:17:37 crc kubenswrapper[4775]: I0123 14:17:37.069200 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Jan 23 14:17:37 crc kubenswrapper[4775]: I0123 14:17:37.069235 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Jan 23 14:17:37 crc kubenswrapper[4775]: I0123 14:17:37.069270 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-9jjmk"
Jan 23 14:17:37 crc kubenswrapper[4775]: I0123 14:17:37.076588 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-gq778"]
Jan 23 14:17:37 crc kubenswrapper[4775]: I0123 14:17:37.190733 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw8j5\" (UniqueName: \"kubernetes.io/projected/ebe0482d-2988-4f4d-929f-4c2980e19cf3-kube-api-access-tw8j5\") pod \"nmstate-operator-646758c888-gq778\" (UID: \"ebe0482d-2988-4f4d-929f-4c2980e19cf3\") " pod="openshift-nmstate/nmstate-operator-646758c888-gq778"
Jan 23 14:17:37 crc kubenswrapper[4775]: I0123 14:17:37.291973 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw8j5\" (UniqueName: \"kubernetes.io/projected/ebe0482d-2988-4f4d-929f-4c2980e19cf3-kube-api-access-tw8j5\") pod \"nmstate-operator-646758c888-gq778\" (UID: \"ebe0482d-2988-4f4d-929f-4c2980e19cf3\") " pod="openshift-nmstate/nmstate-operator-646758c888-gq778"
Jan 23 14:17:37 crc kubenswrapper[4775]: I0123 14:17:37.311190 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw8j5\" (UniqueName: \"kubernetes.io/projected/ebe0482d-2988-4f4d-929f-4c2980e19cf3-kube-api-access-tw8j5\") pod \"nmstate-operator-646758c888-gq778\" (UID: \"ebe0482d-2988-4f4d-929f-4c2980e19cf3\") " pod="openshift-nmstate/nmstate-operator-646758c888-gq778"
Jan 23 14:17:37 crc kubenswrapper[4775]: I0123 14:17:37.402392 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-gq778"
Jan 23 14:17:37 crc kubenswrapper[4775]: I0123 14:17:37.604573 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-gq778"]
Jan 23 14:17:37 crc kubenswrapper[4775]: I0123 14:17:37.626145 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-gq778" event={"ID":"ebe0482d-2988-4f4d-929f-4c2980e19cf3","Type":"ContainerStarted","Data":"81e80e3ac5a5a67f5e8c1f2e60f5c610745d3f01670b445fa53431eaec080877"}
Jan 23 14:17:38 crc kubenswrapper[4775]: I0123 14:17:38.859352 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qrvs8"]
Jan 23 14:17:38 crc kubenswrapper[4775]: I0123 14:17:38.859779 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovn-controller" containerID="cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a" gracePeriod=30
Jan 23 14:17:38 crc kubenswrapper[4775]: I0123 14:17:38.859888 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="nbdb" containerID="cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c" gracePeriod=30
Jan 23 14:17:38 crc kubenswrapper[4775]: I0123 14:17:38.859941 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovn-acl-logging" containerID="cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028" gracePeriod=30
Jan 23 14:17:38 crc kubenswrapper[4775]: I0123 14:17:38.859917 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="northd" containerID="cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316" gracePeriod=30
Jan 23 14:17:38 crc kubenswrapper[4775]: I0123 14:17:38.860056 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="kube-rbac-proxy-node" containerID="cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6" gracePeriod=30
Jan 23 14:17:38 crc kubenswrapper[4775]: I0123 14:17:38.860083 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="sbdb" containerID="cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c" gracePeriod=30
Jan 23 14:17:38 crc kubenswrapper[4775]: I0123 14:17:38.859878 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14" gracePeriod=30
Jan 23 14:17:38 crc kubenswrapper[4775]: I0123 14:17:38.907345 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovnkube-controller" containerID="cri-o://9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481" gracePeriod=30
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.150754 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovnkube-controller/3.log"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.153266 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovn-acl-logging/0.log"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.153767 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovn-controller/0.log"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.154352 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.204860 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vdg25"]
Jan 23 14:17:39 crc kubenswrapper[4775]: E0123 14:17:39.205058 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="nbdb"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205069 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="nbdb"
Jan 23 14:17:39 crc kubenswrapper[4775]: E0123 14:17:39.205080 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovnkube-controller"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205087 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovnkube-controller"
Jan 23 14:17:39 crc kubenswrapper[4775]: E0123 14:17:39.205093 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovn-controller"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205099 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovn-controller"
Jan 23 14:17:39 crc kubenswrapper[4775]: E0123 14:17:39.205107 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="sbdb"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205112 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="sbdb"
Jan 23 14:17:39 crc kubenswrapper[4775]: E0123 14:17:39.205121 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovnkube-controller"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205126 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovnkube-controller"
Jan 23 14:17:39 crc kubenswrapper[4775]: E0123 14:17:39.205136 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="kubecfg-setup"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205142 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="kubecfg-setup"
Jan 23 14:17:39 crc kubenswrapper[4775]: E0123 14:17:39.205153 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovn-acl-logging"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205158 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovn-acl-logging"
Jan 23 14:17:39 crc kubenswrapper[4775]: E0123 14:17:39.205169 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="kube-rbac-proxy-node"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205175 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="kube-rbac-proxy-node"
Jan 23 14:17:39 crc kubenswrapper[4775]: E0123 14:17:39.205186 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovnkube-controller"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205192 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovnkube-controller"
Jan 23 14:17:39 crc kubenswrapper[4775]: E0123 14:17:39.205198 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="northd"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205204 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="northd"
Jan 23 14:17:39 crc kubenswrapper[4775]: E0123 14:17:39.205213 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="kube-rbac-proxy-ovn-metrics"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205218 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="kube-rbac-proxy-ovn-metrics"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205296 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="sbdb"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205307 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovnkube-controller"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205313 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovnkube-controller"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205320 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovnkube-controller"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205326 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovnkube-controller"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205335 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovnkube-controller"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205341 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="kube-rbac-proxy-node"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205347 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovn-acl-logging"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205353 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="kube-rbac-proxy-ovn-metrics"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205361 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="nbdb"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205369 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovn-controller"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205375 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="northd"
Jan 23 14:17:39 crc kubenswrapper[4775]: E0123 14:17:39.205454 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovnkube-controller"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205460 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovnkube-controller"
Jan 23 14:17:39 crc kubenswrapper[4775]: E0123 14:17:39.205624 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovnkube-controller"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205632 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovnkube-controller"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.207001 4775 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220332 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-env-overrides\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220409 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovnkube-script-lib\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220450 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-openvswitch\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220481 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-var-lib-openvswitch\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220509 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-node-log\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220533 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220536 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-var-lib-cni-networks-ovn-kubernetes\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220559 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220578 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovn-node-metrics-cert\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220577 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-node-log" (OuterVolumeSpecName: "node-log") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220604 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-ovn\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220670 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovnkube-config\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220700 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-etc-openvswitch\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220747 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-kubelet\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220603 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220712 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220756 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). 
InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220774 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-cni-bin\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220823 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-run-netns\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220828 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220842 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-slash\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220839 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220868 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-slash" (OuterVolumeSpecName: "host-slash") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220867 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220878 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220863 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-systemd-units\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220938 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-systemd\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220854 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220840 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220968 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-cni-netd\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220993 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-log-socket\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221015 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-run-ovn-kubernetes\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221043 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6jls\" (UniqueName: \"kubernetes.io/projected/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-kube-api-access-d6jls\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221085 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-log-socket" (OuterVolumeSpecName: "log-socket") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221098 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221099 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221127 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221206 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-var-lib-openvswitch\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221240 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-ovn-node-metrics-cert\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221258 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-cni-netd\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221281 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-run-netns\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221337 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-ovnkube-script-lib\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221369 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-run-systemd\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221391 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-cni-bin\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221419 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-env-overrides\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221445 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-kubelet\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221476 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-run-openvswitch\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221540 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-systemd-units\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221579 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221677 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-ovnkube-config\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221732 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-run-ovn\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" 
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221758 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-node-log\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221794 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-run-ovn-kubernetes\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221839 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-slash\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221865 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-log-socket\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221888 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-etc-openvswitch\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221908 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm8td\" (UniqueName: \"kubernetes.io/projected/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-kube-api-access-hm8td\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221968 4775 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-kubelet\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.222010 4775 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-cni-bin\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.222026 4775 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-run-netns\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.222036 4775 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-slash\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.222049 4775 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-systemd-units\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.222058 4775 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-cni-netd\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.222068 4775 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-log-socket\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.222078 4775 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.222087 4775 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.222097 4775 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.222105 4775 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.222114 4775 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.222122 4775 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-node-log\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.222131 4775 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.222140 4775 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.222149 4775 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.222159 4775 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.225908 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.225927 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-kube-api-access-d6jls" (OuterVolumeSpecName: "kube-api-access-d6jls") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "kube-api-access-d6jls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.237452 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324034 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-node-log\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324192 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-run-ovn-kubernetes\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324210 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-node-log\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324248 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-slash\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324323 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-log-socket\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324341 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-run-ovn-kubernetes\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324379 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-etc-openvswitch\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324409 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-slash\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324432 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm8td\" (UniqueName: \"kubernetes.io/projected/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-kube-api-access-hm8td\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324443 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-log-socket\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324445 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-etc-openvswitch\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324565 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-var-lib-openvswitch\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324597 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-var-lib-openvswitch\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324622 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-cni-netd\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324696 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-ovn-node-metrics-cert\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324716 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-cni-netd\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324749 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-run-netns\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324833 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-run-netns\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324799 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-ovnkube-script-lib\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324882 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-run-systemd\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324900 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-cni-bin\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324933 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-env-overrides\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324967 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-kubelet\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324986 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-run-openvswitch\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.325000 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-cni-bin\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.325022 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-kubelet\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.325041 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-systemd-units\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.325018 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-systemd-units\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324985 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-run-systemd\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.325082 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.325096 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-run-openvswitch\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.325118 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-ovnkube-config\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.325157 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.325165 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-run-ovn\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.325200 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-run-ovn\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.325241 4775 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.325256 4775 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-systemd\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.325265 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6jls\" (UniqueName: \"kubernetes.io/projected/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-kube-api-access-d6jls\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.325512 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-env-overrides\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.326063 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-ovnkube-config\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.326298 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-ovnkube-script-lib\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.328266 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-ovn-node-metrics-cert\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.347499 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm8td\" (UniqueName: \"kubernetes.io/projected/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-kube-api-access-hm8td\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.519050 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: W0123 14:17:39.553465 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38c6f656_0f2d_4615_821c_f4aee4c9e2c3.slice/crio-863dd4cad80cb5b8f4a6a99dc850f0b89d6aa4c0d645ce215c26b6b1ea965b87 WatchSource:0}: Error finding container 863dd4cad80cb5b8f4a6a99dc850f0b89d6aa4c0d645ce215c26b6b1ea965b87: Status 404 returned error can't find the container with id 863dd4cad80cb5b8f4a6a99dc850f0b89d6aa4c0d645ce215c26b6b1ea965b87
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.636870 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hpxpf_ba4447c0-bada-49eb-b6b4-b25feff627a9/kube-multus/2.log"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.637229 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hpxpf_ba4447c0-bada-49eb-b6b4-b25feff627a9/kube-multus/1.log"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.637261 4775 generic.go:334] "Generic (PLEG): container finished" podID="ba4447c0-bada-49eb-b6b4-b25feff627a9" containerID="555e839180bbda237f6205ae573637b3ee9ad39df04b574cb5b7216b7c451510" exitCode=2
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.637308 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hpxpf" event={"ID":"ba4447c0-bada-49eb-b6b4-b25feff627a9","Type":"ContainerDied","Data":"555e839180bbda237f6205ae573637b3ee9ad39df04b574cb5b7216b7c451510"}
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.637344 4775 scope.go:117] "RemoveContainer" containerID="8f14be984531a60487db2daba36d9cba7f2bbafa8b8d68889c261f3b2260f058"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.637766 4775 scope.go:117] "RemoveContainer" containerID="555e839180bbda237f6205ae573637b3ee9ad39df04b574cb5b7216b7c451510"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.641639 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovnkube-controller/3.log"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.644614 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovn-acl-logging/0.log"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645094 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovn-controller/0.log"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645377 4775 generic.go:334] "Generic (PLEG): container finished" podID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerID="9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481" exitCode=0
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645398 4775 generic.go:334] "Generic (PLEG): container finished" podID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerID="1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c" exitCode=0
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645406 4775 generic.go:334] "Generic (PLEG): container finished" podID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerID="dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c" exitCode=0
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645413 4775 generic.go:334] "Generic (PLEG): container finished" podID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerID="a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316" exitCode=0
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645422 4775 generic.go:334] "Generic (PLEG): container finished" podID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerID="efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14" exitCode=0
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645430 4775 generic.go:334] "Generic (PLEG): container finished" podID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerID="8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6" exitCode=0
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645437 4775 generic.go:334] "Generic (PLEG): container finished" podID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerID="209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028" exitCode=143
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645444 4775 generic.go:334] "Generic (PLEG): container finished" podID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerID="1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a" exitCode=143
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645481 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerDied","Data":"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481"}
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645506 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerDied","Data":"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c"}
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645516 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerDied","Data":"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c"}
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645525 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerDied","Data":"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316"}
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645534 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerDied","Data":"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14"}
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645543 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerDied","Data":"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6"}
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645553 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481"}
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645562 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157"}
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645568 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c"}
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645575 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c"}
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645581 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316"}
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645586 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14"}
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645592 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6"}
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645597 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028"}
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645602 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a"}
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645607 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40"}
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645614 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerDied","Data":"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028"}
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645621 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481"}
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645627 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157"}
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645633 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c"}
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645638 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c"}
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645644 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container"
containerID={"Type":"cri-o","ID":"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645648 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645654 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645659 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645664 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645668 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645675 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerDied","Data":"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645682 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645688 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645693 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645698 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645703 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645708 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645713 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645718 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645722 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645727 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645734 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerDied","Data":"c9b1bad48b28a1f69c2c2d6ac40d31127808a59f11181daf49f1fb5d9684dc62"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645741 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645748 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645752 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645757 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645763 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645768 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645773 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645777 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645782 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645787 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645886 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.652117 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" event={"ID":"38c6f656-0f2d-4615-821c-f4aee4c9e2c3","Type":"ContainerStarted","Data":"863dd4cad80cb5b8f4a6a99dc850f0b89d6aa4c0d645ce215c26b6b1ea965b87"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.682723 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qrvs8"] Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.683932 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qrvs8"] Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.720232 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" path="/var/lib/kubelet/pods/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/volumes" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.893392 4775 scope.go:117] "RemoveContainer" containerID="9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.913473 4775 scope.go:117] "RemoveContainer" containerID="705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.950090 4775 scope.go:117] "RemoveContainer" containerID="1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.968457 4775 scope.go:117] "RemoveContainer" containerID="dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.037529 4775 scope.go:117] "RemoveContainer" containerID="a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.056424 4775 scope.go:117] "RemoveContainer" containerID="efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.075232 4775 scope.go:117] "RemoveContainer" containerID="8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.099714 4775 scope.go:117] "RemoveContainer" containerID="209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.123115 4775 scope.go:117] "RemoveContainer" containerID="1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.146752 4775 scope.go:117] "RemoveContainer" containerID="684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.165509 4775 scope.go:117] "RemoveContainer" containerID="9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481" Jan 23 14:17:40 crc kubenswrapper[4775]: E0123 14:17:40.169248 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481\": container with ID starting with 9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481 not found: ID does not exist" containerID="9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.169305 4775 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481"} err="failed to get container status \"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481\": rpc error: code = NotFound desc = could not find container \"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481\": container with ID starting with 9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.169340 4775 scope.go:117] "RemoveContainer" containerID="705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157" Jan 23 14:17:40 crc kubenswrapper[4775]: E0123 14:17:40.169594 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157\": container with ID starting with 705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157 not found: ID does not exist" containerID="705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.169625 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157"} err="failed to get container status \"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157\": rpc error: code = NotFound desc = could not find container \"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157\": container with ID starting with 705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.169640 4775 scope.go:117] "RemoveContainer" containerID="1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c" Jan 23 14:17:40 crc kubenswrapper[4775]: E0123 14:17:40.170077 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\": container with ID starting with 1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c not found: ID does not exist" containerID="1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.170124 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c"} err="failed to get container status \"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\": rpc error: code = NotFound desc = could not find container \"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\": container with ID starting with 1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.170151 4775 scope.go:117] "RemoveContainer" containerID="dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c" Jan 23 14:17:40 crc kubenswrapper[4775]: E0123 14:17:40.170640 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\": container with ID starting with dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c not found: ID does not exist" 
containerID="dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.170665 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c"} err="failed to get container status \"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\": rpc error: code = NotFound desc = could not find container \"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\": container with ID starting with dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.170681 4775 scope.go:117] "RemoveContainer" containerID="a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316" Jan 23 14:17:40 crc kubenswrapper[4775]: E0123 14:17:40.171132 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\": container with ID starting with a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316 not found: ID does not exist" containerID="a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.171180 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316"} err="failed to get container status \"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\": rpc error: code = NotFound desc = could not find container \"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\": container with ID starting with a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.171208 4775 scope.go:117] "RemoveContainer" containerID="efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14" Jan 23 14:17:40 crc kubenswrapper[4775]: E0123 14:17:40.171567 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\": container with ID starting with efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14 not found: ID does not exist" containerID="efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.171589 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14"} err="failed to get container status \"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\": rpc error: code = NotFound desc = could not find container \"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\": container with ID starting with efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.171602 4775 scope.go:117] "RemoveContainer" containerID="8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6" Jan 23 14:17:40 crc kubenswrapper[4775]: E0123 14:17:40.171931 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\": container with ID starting with 8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6 not found: ID does not exist" containerID="8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.171958 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6"} err="failed to get container status \"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\": rpc error: code = NotFound desc = could not find container \"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\": container with ID starting with 8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.171976 4775 scope.go:117] "RemoveContainer" containerID="209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028" Jan 23 14:17:40 crc kubenswrapper[4775]: E0123 14:17:40.172328 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\": container with ID starting with 209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028 not found: ID does not exist" containerID="209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.172355 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028"} err="failed to get container status \"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\": rpc error: code = NotFound desc = could not find container \"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\": container with ID starting with 209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.172384 4775 scope.go:117] "RemoveContainer" containerID="1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a" Jan 23 14:17:40 crc kubenswrapper[4775]: E0123 14:17:40.173777 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\": container with ID starting with 1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a not found: ID does not exist" containerID="1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.173834 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a"} err="failed to get container status \"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\": rpc error: code = NotFound desc = could not find container \"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\": container with ID starting with 1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.173850 4775 scope.go:117] "RemoveContainer" containerID="684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40" Jan 23 14:17:40 crc 
kubenswrapper[4775]: E0123 14:17:40.174161 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\": container with ID starting with 684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40 not found: ID does not exist" containerID="684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.174192 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40"} err="failed to get container status \"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\": rpc error: code = NotFound desc = could not find container \"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\": container with ID starting with 684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.174208 4775 scope.go:117] "RemoveContainer" containerID="9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.175757 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481"} err="failed to get container status \"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481\": rpc error: code = NotFound desc = could not find container \"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481\": container with ID starting with 9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.175858 4775 scope.go:117] "RemoveContainer" containerID="705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.176153 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157"} err="failed to get container status \"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157\": rpc error: code = NotFound desc = could not find container \"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157\": container with ID starting with 705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.176175 4775 scope.go:117] "RemoveContainer" containerID="1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.176408 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c"} err="failed to get container status \"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\": rpc error: code = NotFound desc = could not find container \"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\": container with ID starting with 1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.176452 4775 scope.go:117] "RemoveContainer" containerID="dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c" Jan 23 14:17:40 crc 
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.181025 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c"} err="failed to get container status \"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\": rpc error: code = NotFound desc = could not find container \"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\": container with ID starting with dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c not found: ID does not exist"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.181111 4775 scope.go:117] "RemoveContainer" containerID="a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.181603 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316"} err="failed to get container status \"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\": rpc error: code = NotFound desc = could not find container \"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\": container with ID starting with a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316 not found: ID does not exist"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.181648 4775 scope.go:117] "RemoveContainer" containerID="efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.181904 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14"} err="failed to get container status \"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\": rpc error: code = NotFound desc = could not find container \"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\": container with ID starting with efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14 not found: ID does not exist"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.181932 4775 scope.go:117] "RemoveContainer" containerID="8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.182227 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6"} err="failed to get container status \"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\": rpc error: code = NotFound desc = could not find container \"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\": container with ID starting with 8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6 not found: ID does not exist"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.182272 4775 scope.go:117] "RemoveContainer" containerID="209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.182618 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028"} err="failed to get container status \"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\": rpc error: code = NotFound desc = could not find container \"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\": container with ID starting with 209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028 not found: ID does not exist"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.182652 4775 scope.go:117] "RemoveContainer" containerID="1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.182964 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a"} err="failed to get container status \"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\": rpc error: code = NotFound desc = could not find container \"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\": container with ID starting with 1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a not found: ID does not exist"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.182994 4775 scope.go:117] "RemoveContainer" containerID="684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.183239 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40"} err="failed to get container status \"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\": rpc error: code = NotFound desc = could not find container \"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\": container with ID starting with 684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40 not found: ID does not exist"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.183264 4775 scope.go:117] "RemoveContainer" containerID="9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.183449 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481"} err="failed to get container status \"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481\": rpc error: code = NotFound desc = could not find container \"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481\": container with ID starting with 9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481 not found: ID does not exist"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.183473 4775 scope.go:117] "RemoveContainer" containerID="705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.183863 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157"} err="failed to get container status \"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157\": rpc error: code = NotFound desc = could not find container \"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157\": container with ID starting with 705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157 not found: ID does not exist"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.183895 4775 scope.go:117] "RemoveContainer" containerID="1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.184356 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c"} err="failed to get container status \"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\": rpc error: code = NotFound desc = could not find container \"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\": container with ID starting with 1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c not found: ID does not exist"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.184396 4775 scope.go:117] "RemoveContainer" containerID="dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.184945 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c"} err="failed to get container status \"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\": rpc error: code = NotFound desc = could not find container \"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\": container with ID starting with dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c not found: ID does not exist"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.184979 4775 scope.go:117] "RemoveContainer" containerID="a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.185418 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316"} err="failed to get container status \"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\": rpc error: code = NotFound desc = could not find container \"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\": container with ID starting with a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316 not found: ID does not exist"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.185472 4775 scope.go:117] "RemoveContainer" containerID="efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.185850 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14"} err="failed to get container status \"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\": rpc error: code = NotFound desc = could not find container \"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\": container with ID starting with efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14 not found: ID does not exist"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.185878 4775 scope.go:117] "RemoveContainer" containerID="8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.186192 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6"} err="failed to get container status \"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\": rpc error: code = NotFound desc = could not find container \"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\": container with ID starting with 8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6 not found: ID does not exist"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.186230 4775 scope.go:117] "RemoveContainer" containerID="209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.186528 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028"} err="failed to get container status \"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\": rpc error: code = NotFound desc = could not find container \"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\": container with ID starting with 209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028 not found: ID does not exist"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.186551 4775 scope.go:117] "RemoveContainer" containerID="1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.186856 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a"} err="failed to get container status \"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\": rpc error: code = NotFound desc = could not find container \"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\": container with ID starting with 1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a not found: ID does not exist"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.186889 4775 scope.go:117] "RemoveContainer" containerID="684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.189508 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40"} err="failed to get container status \"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\": rpc error: code = NotFound desc = could not find container \"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\": container with ID starting with 684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40 not found: ID does not exist"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.189548 4775 scope.go:117] "RemoveContainer" containerID="9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.190052 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481"} err="failed to get container status \"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481\": rpc error: code = NotFound desc = could not find container \"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481\": container with ID starting with 9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481 not found: ID does not exist"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.190096 4775 scope.go:117] "RemoveContainer" containerID="705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.190524 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157"} err="failed to get container status \"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157\": rpc error: code = NotFound desc = could not find container \"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157\": container with ID starting with 705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157 not found: ID does not exist"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.190553 4775 scope.go:117] "RemoveContainer" containerID="1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.190979 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c"} err="failed to get container status \"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\": rpc error: code = NotFound desc = could not find container \"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\": container with ID starting with 1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c not found: ID does not exist"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.191005 4775 scope.go:117] "RemoveContainer" containerID="dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.191274 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c"} err="failed to get container status \"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\": rpc error: code = NotFound desc = could not find container \"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\": container with ID starting with dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c not found: ID does not exist"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.191306 4775 scope.go:117] "RemoveContainer" containerID="a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.191707 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316"} err="failed to get container status \"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\": rpc error: code = NotFound desc = could not find container \"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\": container with ID starting with a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316 not found: ID does not exist"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.191734 4775 scope.go:117] "RemoveContainer" containerID="efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.192108 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14"} err="failed to get container status \"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\": rpc error: code = NotFound desc = could not find container \"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\": container with ID starting with efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14 not found: ID does not exist"
Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.192133 4775 scope.go:117] "RemoveContainer" containerID="8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6"
containerID="8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.195881 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6"} err="failed to get container status \"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\": rpc error: code = NotFound desc = could not find container \"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\": container with ID starting with 8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.195916 4775 scope.go:117] "RemoveContainer" containerID="209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.196816 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028"} err="failed to get container status \"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\": rpc error: code = NotFound desc = could not find container \"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\": container with ID starting with 209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.196840 4775 scope.go:117] "RemoveContainer" containerID="1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.197225 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a"} err="failed to get container status \"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\": rpc error: code = NotFound desc = could not find container \"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\": container with ID starting with 1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.197243 4775 scope.go:117] "RemoveContainer" containerID="684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.197584 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40"} err="failed to get container status \"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\": rpc error: code = NotFound desc = could not find container \"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\": container with ID starting with 684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.665363 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hpxpf_ba4447c0-bada-49eb-b6b4-b25feff627a9/kube-multus/2.log" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.665530 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hpxpf" event={"ID":"ba4447c0-bada-49eb-b6b4-b25feff627a9","Type":"ContainerStarted","Data":"35159c6e24dab15d013038099a26fcbb008c7f6a1f958150f802dcc8702b8506"} Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 
14:17:40.671430 4775 generic.go:334] "Generic (PLEG): container finished" podID="38c6f656-0f2d-4615-821c-f4aee4c9e2c3" containerID="9187daf8b686d8372af7baba945baeca89c0029515684f3dc91d3d96357f2bd9" exitCode=0 Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.671638 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" event={"ID":"38c6f656-0f2d-4615-821c-f4aee4c9e2c3","Type":"ContainerDied","Data":"9187daf8b686d8372af7baba945baeca89c0029515684f3dc91d3d96357f2bd9"} Jan 23 14:17:41 crc kubenswrapper[4775]: I0123 14:17:41.681444 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" event={"ID":"38c6f656-0f2d-4615-821c-f4aee4c9e2c3","Type":"ContainerStarted","Data":"c35e5d38448e4975a03c78fd7f211148dd37ac8f2cda317b3c4153145a65cc9a"} Jan 23 14:17:41 crc kubenswrapper[4775]: I0123 14:17:41.682094 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" event={"ID":"38c6f656-0f2d-4615-821c-f4aee4c9e2c3","Type":"ContainerStarted","Data":"8cb457df4eac4e589747ab484a5f807a39d18142f6cfa48cc0ff893acd85c539"} Jan 23 14:17:41 crc kubenswrapper[4775]: I0123 14:17:41.682107 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" event={"ID":"38c6f656-0f2d-4615-821c-f4aee4c9e2c3","Type":"ContainerStarted","Data":"460bdc7dc880ad11053b17f10bffd40511d00b069076ba0c6d97ebdded4c96d4"} Jan 23 14:17:41 crc kubenswrapper[4775]: I0123 14:17:41.682114 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" event={"ID":"38c6f656-0f2d-4615-821c-f4aee4c9e2c3","Type":"ContainerStarted","Data":"985dc868e6f32f76610ed213e4cd7a7c2421f288864e26c3fa1b4e44980591be"} Jan 23 14:17:41 crc kubenswrapper[4775]: I0123 14:17:41.682126 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" event={"ID":"38c6f656-0f2d-4615-821c-f4aee4c9e2c3","Type":"ContainerStarted","Data":"68b5e32ac49da3b1c5b9c57952bd51944a321e88012f4a40c374887d5bab9567"} Jan 23 14:17:41 crc kubenswrapper[4775]: I0123 14:17:41.682134 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" event={"ID":"38c6f656-0f2d-4615-821c-f4aee4c9e2c3","Type":"ContainerStarted","Data":"8c8608b88344e136cf1fa734e7f5bdebac9bd2bc412bd4c50f402123db06cd65"} Jan 23 14:17:41 crc kubenswrapper[4775]: I0123 14:17:41.898887 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:41 crc kubenswrapper[4775]: I0123 14:17:41.899445 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:41 crc kubenswrapper[4775]: I0123 14:17:41.948227 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:42 crc kubenswrapper[4775]: I0123 14:17:42.733272 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:44 crc kubenswrapper[4775]: I0123 14:17:44.336322 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lflht"] Jan 23 14:17:44 crc kubenswrapper[4775]: I0123 14:17:44.705728 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" 
event={"ID":"38c6f656-0f2d-4615-821c-f4aee4c9e2c3","Type":"ContainerStarted","Data":"a9de6b148a62eab5a9ad5811a12641fe4c9c27f232074aac3906b4345df10b51"} Jan 23 14:17:45 crc kubenswrapper[4775]: I0123 14:17:45.713174 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lflht" podUID="748a9ff6-4b80-40f9-ae41-37bc66c272f6" containerName="registry-server" containerID="cri-o://f06d4f6767a81a7749fa41e7dfaa09c6b4cb54aa8866d79a23981a879ae6dde5" gracePeriod=2 Jan 23 14:17:46 crc kubenswrapper[4775]: I0123 14:17:46.725019 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" event={"ID":"38c6f656-0f2d-4615-821c-f4aee4c9e2c3","Type":"ContainerStarted","Data":"159bb01d7666ae4f6e0c86adb311016ed0b8ba730d61bfaab8249999a1a855b6"} Jan 23 14:17:46 crc kubenswrapper[4775]: I0123 14:17:46.725764 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:17:46 crc kubenswrapper[4775]: I0123 14:17:46.725779 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:17:46 crc kubenswrapper[4775]: I0123 14:17:46.753622 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:17:46 crc kubenswrapper[4775]: I0123 14:17:46.757083 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" podStartSLOduration=7.757068566 podStartE2EDuration="7.757068566s" podCreationTimestamp="2026-01-23 14:17:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:17:46.756707855 +0000 UTC m=+813.751536595" watchObservedRunningTime="2026-01-23 14:17:46.757068566 +0000 UTC m=+813.751897306" Jan 23 14:17:47 crc kubenswrapper[4775]: I0123 14:17:47.731416 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:17:47 crc kubenswrapper[4775]: I0123 14:17:47.774766 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:17:49 crc kubenswrapper[4775]: I0123 14:17:49.745858 4775 generic.go:334] "Generic (PLEG): container finished" podID="748a9ff6-4b80-40f9-ae41-37bc66c272f6" containerID="f06d4f6767a81a7749fa41e7dfaa09c6b4cb54aa8866d79a23981a879ae6dde5" exitCode=0 Jan 23 14:17:49 crc kubenswrapper[4775]: I0123 14:17:49.745934 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lflht" event={"ID":"748a9ff6-4b80-40f9-ae41-37bc66c272f6","Type":"ContainerDied","Data":"f06d4f6767a81a7749fa41e7dfaa09c6b4cb54aa8866d79a23981a879ae6dde5"} Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.392722 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.508212 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/748a9ff6-4b80-40f9-ae41-37bc66c272f6-catalog-content\") pod \"748a9ff6-4b80-40f9-ae41-37bc66c272f6\" (UID: \"748a9ff6-4b80-40f9-ae41-37bc66c272f6\") " Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.508267 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/748a9ff6-4b80-40f9-ae41-37bc66c272f6-utilities\") pod \"748a9ff6-4b80-40f9-ae41-37bc66c272f6\" (UID: \"748a9ff6-4b80-40f9-ae41-37bc66c272f6\") " Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.508320 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gpcj\" (UniqueName: \"kubernetes.io/projected/748a9ff6-4b80-40f9-ae41-37bc66c272f6-kube-api-access-4gpcj\") pod \"748a9ff6-4b80-40f9-ae41-37bc66c272f6\" (UID: \"748a9ff6-4b80-40f9-ae41-37bc66c272f6\") " Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.509692 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/748a9ff6-4b80-40f9-ae41-37bc66c272f6-utilities" (OuterVolumeSpecName: "utilities") pod "748a9ff6-4b80-40f9-ae41-37bc66c272f6" (UID: "748a9ff6-4b80-40f9-ae41-37bc66c272f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.515757 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/748a9ff6-4b80-40f9-ae41-37bc66c272f6-kube-api-access-4gpcj" (OuterVolumeSpecName: "kube-api-access-4gpcj") pod "748a9ff6-4b80-40f9-ae41-37bc66c272f6" (UID: "748a9ff6-4b80-40f9-ae41-37bc66c272f6"). InnerVolumeSpecName "kube-api-access-4gpcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.610261 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/748a9ff6-4b80-40f9-ae41-37bc66c272f6-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.610519 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gpcj\" (UniqueName: \"kubernetes.io/projected/748a9ff6-4b80-40f9-ae41-37bc66c272f6-kube-api-access-4gpcj\") on node \"crc\" DevicePath \"\"" Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.657020 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/748a9ff6-4b80-40f9-ae41-37bc66c272f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "748a9ff6-4b80-40f9-ae41-37bc66c272f6" (UID: "748a9ff6-4b80-40f9-ae41-37bc66c272f6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.711594 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/748a9ff6-4b80-40f9-ae41-37bc66c272f6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.761071 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lflht" event={"ID":"748a9ff6-4b80-40f9-ae41-37bc66c272f6","Type":"ContainerDied","Data":"93c922f1487ddf500d1f9351c5caa4eedc2618e3851fee725cf2afd7fd0be358"} Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.761105 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.761156 4775 scope.go:117] "RemoveContainer" containerID="f06d4f6767a81a7749fa41e7dfaa09c6b4cb54aa8866d79a23981a879ae6dde5" Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.762832 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-gq778" event={"ID":"ebe0482d-2988-4f4d-929f-4c2980e19cf3","Type":"ContainerStarted","Data":"4a4c52e9e34702af099491e1040d0c536534dd8bfeeb011dd44cfac84f07079a"} Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.792445 4775 scope.go:117] "RemoveContainer" containerID="61ce9ba1643c99fc37fc14a63747755de7afc6a9d3819f1c9a37d622b4cf7f7f" Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.796947 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-gq778" podStartSLOduration=1.012735561 podStartE2EDuration="14.79691875s" podCreationTimestamp="2026-01-23 14:17:37 +0000 UTC" firstStartedPulling="2026-01-23 14:17:37.611194821 +0000 UTC m=+804.606023561" lastFinishedPulling="2026-01-23 14:17:51.39537797 +0000 UTC m=+818.390206750" observedRunningTime="2026-01-23 14:17:51.792829442 +0000 UTC m=+818.787658252" watchObservedRunningTime="2026-01-23 14:17:51.79691875 +0000 UTC m=+818.791747520" Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.821728 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lflht"] Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.829722 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lflht"] Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.831772 4775 scope.go:117] "RemoveContainer" containerID="f97a99a72e6da74778e3548426a45903a3d520396f1383be0c6443f902f8596a" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.807537 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-p7nxk"] Jan 23 14:17:52 crc kubenswrapper[4775]: E0123 14:17:52.807784 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="748a9ff6-4b80-40f9-ae41-37bc66c272f6" containerName="extract-content" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.807851 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="748a9ff6-4b80-40f9-ae41-37bc66c272f6" containerName="extract-content" Jan 23 14:17:52 crc kubenswrapper[4775]: E0123 14:17:52.807867 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="748a9ff6-4b80-40f9-ae41-37bc66c272f6" containerName="extract-utilities" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.807878 4775 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="748a9ff6-4b80-40f9-ae41-37bc66c272f6" containerName="extract-utilities" Jan 23 14:17:52 crc kubenswrapper[4775]: E0123 14:17:52.807893 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="748a9ff6-4b80-40f9-ae41-37bc66c272f6" containerName="registry-server" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.807901 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="748a9ff6-4b80-40f9-ae41-37bc66c272f6" containerName="registry-server" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.808043 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="748a9ff6-4b80-40f9-ae41-37bc66c272f6" containerName="registry-server" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.808690 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-p7nxk" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.811410 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-nrpjz" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.829293 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff"] Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.832188 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.838752 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.844017 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-p7nxk"] Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.866074 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-wmglj"] Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.866880 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-wmglj" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.881478 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff"] Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.925322 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6932e29c-8eac-4e0f-9516-c2e922655cbc-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-rnbff\" (UID: \"6932e29c-8eac-4e0f-9516-c2e922655cbc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.925379 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctk4g\" (UniqueName: \"kubernetes.io/projected/6932e29c-8eac-4e0f-9516-c2e922655cbc-kube-api-access-ctk4g\") pod \"nmstate-webhook-8474b5b9d8-rnbff\" (UID: \"6932e29c-8eac-4e0f-9516-c2e922655cbc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.925404 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/18100557-00ef-4de8-9a7f-df953190a9c6-ovs-socket\") pod \"nmstate-handler-wmglj\" (UID: \"18100557-00ef-4de8-9a7f-df953190a9c6\") " pod="openshift-nmstate/nmstate-handler-wmglj" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.925492 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xz4m\" (UniqueName: \"kubernetes.io/projected/18100557-00ef-4de8-9a7f-df953190a9c6-kube-api-access-4xz4m\") pod \"nmstate-handler-wmglj\" (UID: \"18100557-00ef-4de8-9a7f-df953190a9c6\") " pod="openshift-nmstate/nmstate-handler-wmglj" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.925544 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxb2k\" (UniqueName: \"kubernetes.io/projected/97726a36-cf4b-4688-b028-448734bd8c23-kube-api-access-qxb2k\") pod \"nmstate-metrics-54757c584b-p7nxk\" (UID: \"97726a36-cf4b-4688-b028-448734bd8c23\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-p7nxk" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.925571 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/18100557-00ef-4de8-9a7f-df953190a9c6-nmstate-lock\") pod \"nmstate-handler-wmglj\" (UID: \"18100557-00ef-4de8-9a7f-df953190a9c6\") " pod="openshift-nmstate/nmstate-handler-wmglj" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.925598 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/18100557-00ef-4de8-9a7f-df953190a9c6-dbus-socket\") pod \"nmstate-handler-wmglj\" (UID: \"18100557-00ef-4de8-9a7f-df953190a9c6\") " pod="openshift-nmstate/nmstate-handler-wmglj" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.944828 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs"] Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.945559 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.948038 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-srpxs" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.948302 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.950104 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.960869 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs"] Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.026466 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xz4m\" (UniqueName: \"kubernetes.io/projected/18100557-00ef-4de8-9a7f-df953190a9c6-kube-api-access-4xz4m\") pod \"nmstate-handler-wmglj\" (UID: \"18100557-00ef-4de8-9a7f-df953190a9c6\") " pod="openshift-nmstate/nmstate-handler-wmglj" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.026521 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e932364d-5f85-43fd-ba05-f4e0934482c2-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-w5xfs\" (UID: \"e932364d-5f85-43fd-ba05-f4e0934482c2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.026542 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxb2k\" (UniqueName: \"kubernetes.io/projected/97726a36-cf4b-4688-b028-448734bd8c23-kube-api-access-qxb2k\") pod \"nmstate-metrics-54757c584b-p7nxk\" (UID: \"97726a36-cf4b-4688-b028-448734bd8c23\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-p7nxk" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.026559 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/18100557-00ef-4de8-9a7f-df953190a9c6-nmstate-lock\") pod \"nmstate-handler-wmglj\" (UID: \"18100557-00ef-4de8-9a7f-df953190a9c6\") " pod="openshift-nmstate/nmstate-handler-wmglj" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.026576 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/18100557-00ef-4de8-9a7f-df953190a9c6-dbus-socket\") pod \"nmstate-handler-wmglj\" (UID: \"18100557-00ef-4de8-9a7f-df953190a9c6\") " pod="openshift-nmstate/nmstate-handler-wmglj" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.026602 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbfnj\" (UniqueName: \"kubernetes.io/projected/e932364d-5f85-43fd-ba05-f4e0934482c2-kube-api-access-wbfnj\") pod \"nmstate-console-plugin-7754f76f8b-w5xfs\" (UID: \"e932364d-5f85-43fd-ba05-f4e0934482c2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.026618 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e932364d-5f85-43fd-ba05-f4e0934482c2-nginx-conf\") pod 
\"nmstate-console-plugin-7754f76f8b-w5xfs\" (UID: \"e932364d-5f85-43fd-ba05-f4e0934482c2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.026635 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6932e29c-8eac-4e0f-9516-c2e922655cbc-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-rnbff\" (UID: \"6932e29c-8eac-4e0f-9516-c2e922655cbc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.026680 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctk4g\" (UniqueName: \"kubernetes.io/projected/6932e29c-8eac-4e0f-9516-c2e922655cbc-kube-api-access-ctk4g\") pod \"nmstate-webhook-8474b5b9d8-rnbff\" (UID: \"6932e29c-8eac-4e0f-9516-c2e922655cbc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.026699 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/18100557-00ef-4de8-9a7f-df953190a9c6-ovs-socket\") pod \"nmstate-handler-wmglj\" (UID: \"18100557-00ef-4de8-9a7f-df953190a9c6\") " pod="openshift-nmstate/nmstate-handler-wmglj" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.026722 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/18100557-00ef-4de8-9a7f-df953190a9c6-nmstate-lock\") pod \"nmstate-handler-wmglj\" (UID: \"18100557-00ef-4de8-9a7f-df953190a9c6\") " pod="openshift-nmstate/nmstate-handler-wmglj" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.026761 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/18100557-00ef-4de8-9a7f-df953190a9c6-ovs-socket\") pod \"nmstate-handler-wmglj\" (UID: \"18100557-00ef-4de8-9a7f-df953190a9c6\") " pod="openshift-nmstate/nmstate-handler-wmglj" Jan 23 14:17:53 crc kubenswrapper[4775]: E0123 14:17:53.026940 4775 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.026982 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/18100557-00ef-4de8-9a7f-df953190a9c6-dbus-socket\") pod \"nmstate-handler-wmglj\" (UID: \"18100557-00ef-4de8-9a7f-df953190a9c6\") " pod="openshift-nmstate/nmstate-handler-wmglj" Jan 23 14:17:53 crc kubenswrapper[4775]: E0123 14:17:53.027026 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6932e29c-8eac-4e0f-9516-c2e922655cbc-tls-key-pair podName:6932e29c-8eac-4e0f-9516-c2e922655cbc nodeName:}" failed. No retries permitted until 2026-01-23 14:17:53.526991664 +0000 UTC m=+820.521820444 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/6932e29c-8eac-4e0f-9516-c2e922655cbc-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-rnbff" (UID: "6932e29c-8eac-4e0f-9516-c2e922655cbc") : secret "openshift-nmstate-webhook" not found Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.050331 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctk4g\" (UniqueName: \"kubernetes.io/projected/6932e29c-8eac-4e0f-9516-c2e922655cbc-kube-api-access-ctk4g\") pod \"nmstate-webhook-8474b5b9d8-rnbff\" (UID: \"6932e29c-8eac-4e0f-9516-c2e922655cbc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.059279 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xz4m\" (UniqueName: \"kubernetes.io/projected/18100557-00ef-4de8-9a7f-df953190a9c6-kube-api-access-4xz4m\") pod \"nmstate-handler-wmglj\" (UID: \"18100557-00ef-4de8-9a7f-df953190a9c6\") " pod="openshift-nmstate/nmstate-handler-wmglj" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.064499 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxb2k\" (UniqueName: \"kubernetes.io/projected/97726a36-cf4b-4688-b028-448734bd8c23-kube-api-access-qxb2k\") pod \"nmstate-metrics-54757c584b-p7nxk\" (UID: \"97726a36-cf4b-4688-b028-448734bd8c23\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-p7nxk" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.127861 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e932364d-5f85-43fd-ba05-f4e0934482c2-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-w5xfs\" (UID: \"e932364d-5f85-43fd-ba05-f4e0934482c2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.127947 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbfnj\" (UniqueName: \"kubernetes.io/projected/e932364d-5f85-43fd-ba05-f4e0934482c2-kube-api-access-wbfnj\") pod \"nmstate-console-plugin-7754f76f8b-w5xfs\" (UID: \"e932364d-5f85-43fd-ba05-f4e0934482c2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.127994 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e932364d-5f85-43fd-ba05-f4e0934482c2-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-w5xfs\" (UID: \"e932364d-5f85-43fd-ba05-f4e0934482c2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.128900 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e932364d-5f85-43fd-ba05-f4e0934482c2-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-w5xfs\" (UID: \"e932364d-5f85-43fd-ba05-f4e0934482c2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.131288 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7c865d7849-c7sv9"] Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.131958 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.134204 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-p7nxk" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.135795 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e932364d-5f85-43fd-ba05-f4e0934482c2-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-w5xfs\" (UID: \"e932364d-5f85-43fd-ba05-f4e0934482c2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.156134 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbfnj\" (UniqueName: \"kubernetes.io/projected/e932364d-5f85-43fd-ba05-f4e0934482c2-kube-api-access-wbfnj\") pod \"nmstate-console-plugin-7754f76f8b-w5xfs\" (UID: \"e932364d-5f85-43fd-ba05-f4e0934482c2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.159038 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7c865d7849-c7sv9"] Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.182135 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-wmglj" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.229364 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq7mj\" (UniqueName: \"kubernetes.io/projected/efd2a7f1-33df-47f9-8482-153d9e0beeb8-kube-api-access-pq7mj\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.229411 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/efd2a7f1-33df-47f9-8482-153d9e0beeb8-console-oauth-config\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.229428 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/efd2a7f1-33df-47f9-8482-153d9e0beeb8-service-ca\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.229462 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efd2a7f1-33df-47f9-8482-153d9e0beeb8-trusted-ca-bundle\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.229573 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/efd2a7f1-33df-47f9-8482-153d9e0beeb8-console-serving-cert\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 
crc kubenswrapper[4775]: I0123 14:17:53.229615 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/efd2a7f1-33df-47f9-8482-153d9e0beeb8-oauth-serving-cert\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.229653 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/efd2a7f1-33df-47f9-8482-153d9e0beeb8-console-config\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.261417 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.331253 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq7mj\" (UniqueName: \"kubernetes.io/projected/efd2a7f1-33df-47f9-8482-153d9e0beeb8-kube-api-access-pq7mj\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.331591 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/efd2a7f1-33df-47f9-8482-153d9e0beeb8-console-oauth-config\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.331616 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/efd2a7f1-33df-47f9-8482-153d9e0beeb8-service-ca\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.331651 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efd2a7f1-33df-47f9-8482-153d9e0beeb8-trusted-ca-bundle\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.331712 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/efd2a7f1-33df-47f9-8482-153d9e0beeb8-console-serving-cert\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.331739 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/efd2a7f1-33df-47f9-8482-153d9e0beeb8-oauth-serving-cert\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.331758 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" 
(UniqueName: \"kubernetes.io/configmap/efd2a7f1-33df-47f9-8482-153d9e0beeb8-console-config\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.332618 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/efd2a7f1-33df-47f9-8482-153d9e0beeb8-console-config\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.334998 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/efd2a7f1-33df-47f9-8482-153d9e0beeb8-service-ca\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.335114 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efd2a7f1-33df-47f9-8482-153d9e0beeb8-trusted-ca-bundle\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.335539 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/efd2a7f1-33df-47f9-8482-153d9e0beeb8-oauth-serving-cert\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.338160 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/efd2a7f1-33df-47f9-8482-153d9e0beeb8-console-oauth-config\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.338381 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/efd2a7f1-33df-47f9-8482-153d9e0beeb8-console-serving-cert\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.352457 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq7mj\" (UniqueName: \"kubernetes.io/projected/efd2a7f1-33df-47f9-8482-153d9e0beeb8-kube-api-access-pq7mj\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.394905 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-p7nxk"] Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.455788 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs"] Jan 23 14:17:53 crc kubenswrapper[4775]: W0123 14:17:53.463229 4775 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode932364d_5f85_43fd_ba05_f4e0934482c2.slice/crio-4dd5724bd923305009f412177d40b11433b669334335b9f9c2645a617f67b5fb WatchSource:0}: Error finding container 4dd5724bd923305009f412177d40b11433b669334335b9f9c2645a617f67b5fb: Status 404 returned error can't find the container with id 4dd5724bd923305009f412177d40b11433b669334335b9f9c2645a617f67b5fb Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.477826 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.535143 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6932e29c-8eac-4e0f-9516-c2e922655cbc-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-rnbff\" (UID: \"6932e29c-8eac-4e0f-9516-c2e922655cbc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.539622 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6932e29c-8eac-4e0f-9516-c2e922655cbc-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-rnbff\" (UID: \"6932e29c-8eac-4e0f-9516-c2e922655cbc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.646134 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7c865d7849-c7sv9"] Jan 23 14:17:53 crc kubenswrapper[4775]: W0123 14:17:53.652845 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefd2a7f1_33df_47f9_8482_153d9e0beeb8.slice/crio-1bf41652fc016f5e54c20860648b543ebeeb96e18b1ecd05adbcb88934b56fd8 WatchSource:0}: Error finding container 1bf41652fc016f5e54c20860648b543ebeeb96e18b1ecd05adbcb88934b56fd8: Status 404 returned error can't find the container with id 1bf41652fc016f5e54c20860648b543ebeeb96e18b1ecd05adbcb88934b56fd8 Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.727317 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="748a9ff6-4b80-40f9-ae41-37bc66c272f6" path="/var/lib/kubelet/pods/748a9ff6-4b80-40f9-ae41-37bc66c272f6/volumes" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.748328 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.779716 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wmglj" event={"ID":"18100557-00ef-4de8-9a7f-df953190a9c6","Type":"ContainerStarted","Data":"b3d3c1c6fbdcb239d8fe5a45d103295d6b5151975c3b3e109348e94e483dc186"} Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.781242 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-p7nxk" event={"ID":"97726a36-cf4b-4688-b028-448734bd8c23","Type":"ContainerStarted","Data":"5154885423dddf6d8bc166f5cb8ad2830da14db2cf4b67f8388d24561954b5d1"} Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.782822 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs" event={"ID":"e932364d-5f85-43fd-ba05-f4e0934482c2","Type":"ContainerStarted","Data":"4dd5724bd923305009f412177d40b11433b669334335b9f9c2645a617f67b5fb"} Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.785401 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c865d7849-c7sv9" event={"ID":"efd2a7f1-33df-47f9-8482-153d9e0beeb8","Type":"ContainerStarted","Data":"1bf41652fc016f5e54c20860648b543ebeeb96e18b1ecd05adbcb88934b56fd8"} Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.807638 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7c865d7849-c7sv9" podStartSLOduration=0.807613653 podStartE2EDuration="807.613653ms" podCreationTimestamp="2026-01-23 14:17:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:17:53.801160466 +0000 UTC m=+820.795989216" watchObservedRunningTime="2026-01-23 14:17:53.807613653 +0000 UTC m=+820.802442393" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.932663 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff"] Jan 23 14:17:53 crc kubenswrapper[4775]: W0123 14:17:53.937783 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6932e29c_8eac_4e0f_9516_c2e922655cbc.slice/crio-a986bac49e693d9e72e9c8dbf7b7c599c4e39577058b90302709cc4643a5f372 WatchSource:0}: Error finding container a986bac49e693d9e72e9c8dbf7b7c599c4e39577058b90302709cc4643a5f372: Status 404 returned error can't find the container with id a986bac49e693d9e72e9c8dbf7b7c599c4e39577058b90302709cc4643a5f372 Jan 23 14:17:54 crc kubenswrapper[4775]: I0123 14:17:54.795753 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c865d7849-c7sv9" event={"ID":"efd2a7f1-33df-47f9-8482-153d9e0beeb8","Type":"ContainerStarted","Data":"7a000b614dd77b084e202a456815f5889d89b5a3747f0f1e7dcec6cb0a9cbac0"} Jan 23 14:17:54 crc kubenswrapper[4775]: I0123 14:17:54.797304 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff" event={"ID":"6932e29c-8eac-4e0f-9516-c2e922655cbc","Type":"ContainerStarted","Data":"a986bac49e693d9e72e9c8dbf7b7c599c4e39577058b90302709cc4643a5f372"} Jan 23 14:17:56 crc kubenswrapper[4775]: I0123 14:17:56.810949 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wmglj" 
event={"ID":"18100557-00ef-4de8-9a7f-df953190a9c6","Type":"ContainerStarted","Data":"a7e9f3aa7ecded5d64bc1f143b905ca8ff72a0d3acd447775cf5da2d439fdb10"} Jan 23 14:17:56 crc kubenswrapper[4775]: I0123 14:17:56.811611 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-wmglj" Jan 23 14:17:56 crc kubenswrapper[4775]: I0123 14:17:56.815445 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-p7nxk" event={"ID":"97726a36-cf4b-4688-b028-448734bd8c23","Type":"ContainerStarted","Data":"cee3a8c344075934c867c0d811a1a781233b47794d730d65cb6e22db1f428fd1"} Jan 23 14:17:56 crc kubenswrapper[4775]: I0123 14:17:56.817610 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs" event={"ID":"e932364d-5f85-43fd-ba05-f4e0934482c2","Type":"ContainerStarted","Data":"d45ba1f257acb413ce18702b33b023f421ebdf37d701613aaebe894fece57856"} Jan 23 14:17:56 crc kubenswrapper[4775]: I0123 14:17:56.820054 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff" event={"ID":"6932e29c-8eac-4e0f-9516-c2e922655cbc","Type":"ContainerStarted","Data":"3dd0e39889d715e1a5db8f67a6044becfbd69078c63fdecb3b7683b832ef2076"} Jan 23 14:17:56 crc kubenswrapper[4775]: I0123 14:17:56.820259 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff" Jan 23 14:17:56 crc kubenswrapper[4775]: I0123 14:17:56.831204 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-wmglj" podStartSLOduration=1.976204542 podStartE2EDuration="4.831186s" podCreationTimestamp="2026-01-23 14:17:52 +0000 UTC" firstStartedPulling="2026-01-23 14:17:53.200796853 +0000 UTC m=+820.195625593" lastFinishedPulling="2026-01-23 14:17:56.055778311 +0000 UTC m=+823.050607051" observedRunningTime="2026-01-23 14:17:56.830434788 +0000 UTC m=+823.825263538" watchObservedRunningTime="2026-01-23 14:17:56.831186 +0000 UTC m=+823.826014750" Jan 23 14:17:56 crc kubenswrapper[4775]: I0123 14:17:56.850254 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs" podStartSLOduration=2.266333718 podStartE2EDuration="4.850233131s" podCreationTimestamp="2026-01-23 14:17:52 +0000 UTC" firstStartedPulling="2026-01-23 14:17:53.465372029 +0000 UTC m=+820.460200769" lastFinishedPulling="2026-01-23 14:17:56.049271442 +0000 UTC m=+823.044100182" observedRunningTime="2026-01-23 14:17:56.847635366 +0000 UTC m=+823.842464116" watchObservedRunningTime="2026-01-23 14:17:56.850233131 +0000 UTC m=+823.845061871" Jan 23 14:17:56 crc kubenswrapper[4775]: I0123 14:17:56.871470 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff" podStartSLOduration=2.753711531 podStartE2EDuration="4.871453585s" podCreationTimestamp="2026-01-23 14:17:52 +0000 UTC" firstStartedPulling="2026-01-23 14:17:53.940284182 +0000 UTC m=+820.935112922" lastFinishedPulling="2026-01-23 14:17:56.058026236 +0000 UTC m=+823.052854976" observedRunningTime="2026-01-23 14:17:56.868462729 +0000 UTC m=+823.863291489" watchObservedRunningTime="2026-01-23 14:17:56.871453585 +0000 UTC m=+823.866282325" Jan 23 14:17:58 crc kubenswrapper[4775]: I0123 14:17:58.834583 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-54757c584b-p7nxk" event={"ID":"97726a36-cf4b-4688-b028-448734bd8c23","Type":"ContainerStarted","Data":"09e62661f220d80a4fa2df22aaf27835af3369938efb25fd34802463aa546832"} Jan 23 14:18:03 crc kubenswrapper[4775]: I0123 14:18:03.222339 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-wmglj" Jan 23 14:18:03 crc kubenswrapper[4775]: I0123 14:18:03.247770 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-p7nxk" podStartSLOduration=6.30682747 podStartE2EDuration="11.247746882s" podCreationTimestamp="2026-01-23 14:17:52 +0000 UTC" firstStartedPulling="2026-01-23 14:17:53.403065006 +0000 UTC m=+820.397893756" lastFinishedPulling="2026-01-23 14:17:58.343984428 +0000 UTC m=+825.338813168" observedRunningTime="2026-01-23 14:17:58.854522102 +0000 UTC m=+825.849350872" watchObservedRunningTime="2026-01-23 14:18:03.247746882 +0000 UTC m=+830.242575662" Jan 23 14:18:03 crc kubenswrapper[4775]: I0123 14:18:03.478309 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:18:03 crc kubenswrapper[4775]: I0123 14:18:03.478388 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:18:03 crc kubenswrapper[4775]: I0123 14:18:03.485890 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:18:03 crc kubenswrapper[4775]: I0123 14:18:03.872339 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:18:03 crc kubenswrapper[4775]: I0123 14:18:03.999333 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-fgb82"] Jan 23 14:18:09 crc kubenswrapper[4775]: I0123 14:18:09.553706 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:18:13 crc kubenswrapper[4775]: I0123 14:18:13.758370 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff" Jan 23 14:18:28 crc kubenswrapper[4775]: I0123 14:18:28.352460 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f"] Jan 23 14:18:28 crc kubenswrapper[4775]: I0123 14:18:28.356311 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" Jan 23 14:18:28 crc kubenswrapper[4775]: I0123 14:18:28.360989 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 14:18:28 crc kubenswrapper[4775]: I0123 14:18:28.365905 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f"] Jan 23 14:18:28 crc kubenswrapper[4775]: I0123 14:18:28.448288 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb88k\" (UniqueName: \"kubernetes.io/projected/6f15de03-78a8-4158-8a06-0174d617e32b-kube-api-access-vb88k\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f\" (UID: \"6f15de03-78a8-4158-8a06-0174d617e32b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" Jan 23 14:18:28 crc kubenswrapper[4775]: I0123 14:18:28.448356 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f15de03-78a8-4158-8a06-0174d617e32b-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f\" (UID: \"6f15de03-78a8-4158-8a06-0174d617e32b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" Jan 23 14:18:28 crc kubenswrapper[4775]: I0123 14:18:28.448413 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f15de03-78a8-4158-8a06-0174d617e32b-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f\" (UID: \"6f15de03-78a8-4158-8a06-0174d617e32b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" Jan 23 14:18:28 crc kubenswrapper[4775]: I0123 14:18:28.549347 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb88k\" (UniqueName: \"kubernetes.io/projected/6f15de03-78a8-4158-8a06-0174d617e32b-kube-api-access-vb88k\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f\" (UID: \"6f15de03-78a8-4158-8a06-0174d617e32b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" Jan 23 14:18:28 crc kubenswrapper[4775]: I0123 14:18:28.549675 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f15de03-78a8-4158-8a06-0174d617e32b-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f\" (UID: \"6f15de03-78a8-4158-8a06-0174d617e32b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" Jan 23 14:18:28 crc kubenswrapper[4775]: I0123 14:18:28.549730 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f15de03-78a8-4158-8a06-0174d617e32b-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f\" (UID: \"6f15de03-78a8-4158-8a06-0174d617e32b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" Jan 23 14:18:28 crc kubenswrapper[4775]: I0123 14:18:28.550751 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/6f15de03-78a8-4158-8a06-0174d617e32b-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f\" (UID: \"6f15de03-78a8-4158-8a06-0174d617e32b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" Jan 23 14:18:28 crc kubenswrapper[4775]: I0123 14:18:28.550843 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f15de03-78a8-4158-8a06-0174d617e32b-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f\" (UID: \"6f15de03-78a8-4158-8a06-0174d617e32b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" Jan 23 14:18:28 crc kubenswrapper[4775]: I0123 14:18:28.586011 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb88k\" (UniqueName: \"kubernetes.io/projected/6f15de03-78a8-4158-8a06-0174d617e32b-kube-api-access-vb88k\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f\" (UID: \"6f15de03-78a8-4158-8a06-0174d617e32b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" Jan 23 14:18:28 crc kubenswrapper[4775]: I0123 14:18:28.713742 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" Jan 23 14:18:28 crc kubenswrapper[4775]: I0123 14:18:28.993924 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f"] Jan 23 14:18:29 crc kubenswrapper[4775]: W0123 14:18:29.001340 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f15de03_78a8_4158_8a06_0174d617e32b.slice/crio-358e67ba750c945eb1905172d8ca362a184f647280970764596331751b3f85e7 WatchSource:0}: Error finding container 358e67ba750c945eb1905172d8ca362a184f647280970764596331751b3f85e7: Status 404 returned error can't find the container with id 358e67ba750c945eb1905172d8ca362a184f647280970764596331751b3f85e7 Jan 23 14:18:29 crc kubenswrapper[4775]: I0123 14:18:29.026389 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" event={"ID":"6f15de03-78a8-4158-8a06-0174d617e32b","Type":"ContainerStarted","Data":"358e67ba750c945eb1905172d8ca362a184f647280970764596331751b3f85e7"} Jan 23 14:18:29 crc kubenswrapper[4775]: I0123 14:18:29.055369 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-fgb82" podUID="a6821f92-2d15-4dc0-92ed-7a30cef98db9" containerName="console" containerID="cri-o://f4aaa0765a07f4839c71e2b2a303a3c0c625cc8d1414133eff523c9a0838b442" gracePeriod=15 Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.012649 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-fgb82_a6821f92-2d15-4dc0-92ed-7a30cef98db9/console/0.log" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.013192 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.032832 4775 generic.go:334] "Generic (PLEG): container finished" podID="6f15de03-78a8-4158-8a06-0174d617e32b" containerID="0a352d91c01c1461c69f722c13f874d98c91d123737945ce6c53a0c87e019e94" exitCode=0 Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.032899 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" event={"ID":"6f15de03-78a8-4158-8a06-0174d617e32b","Type":"ContainerDied","Data":"0a352d91c01c1461c69f722c13f874d98c91d123737945ce6c53a0c87e019e94"} Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.034949 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-fgb82_a6821f92-2d15-4dc0-92ed-7a30cef98db9/console/0.log" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.035054 4775 generic.go:334] "Generic (PLEG): container finished" podID="a6821f92-2d15-4dc0-92ed-7a30cef98db9" containerID="f4aaa0765a07f4839c71e2b2a303a3c0c625cc8d1414133eff523c9a0838b442" exitCode=2 Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.035110 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fgb82" event={"ID":"a6821f92-2d15-4dc0-92ed-7a30cef98db9","Type":"ContainerDied","Data":"f4aaa0765a07f4839c71e2b2a303a3c0c625cc8d1414133eff523c9a0838b442"} Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.035158 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fgb82" event={"ID":"a6821f92-2d15-4dc0-92ed-7a30cef98db9","Type":"ContainerDied","Data":"ef54fd5e26cacb272f1e1be9cfe28c0c931df15d597bb7da81a47734c646362b"} Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.035129 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.035215 4775 scope.go:117] "RemoveContainer" containerID="f4aaa0765a07f4839c71e2b2a303a3c0c625cc8d1414133eff523c9a0838b442" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.073653 4775 scope.go:117] "RemoveContainer" containerID="f4aaa0765a07f4839c71e2b2a303a3c0c625cc8d1414133eff523c9a0838b442" Jan 23 14:18:30 crc kubenswrapper[4775]: E0123 14:18:30.075062 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4aaa0765a07f4839c71e2b2a303a3c0c625cc8d1414133eff523c9a0838b442\": container with ID starting with f4aaa0765a07f4839c71e2b2a303a3c0c625cc8d1414133eff523c9a0838b442 not found: ID does not exist" containerID="f4aaa0765a07f4839c71e2b2a303a3c0c625cc8d1414133eff523c9a0838b442" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.075103 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4aaa0765a07f4839c71e2b2a303a3c0c625cc8d1414133eff523c9a0838b442"} err="failed to get container status \"f4aaa0765a07f4839c71e2b2a303a3c0c625cc8d1414133eff523c9a0838b442\": rpc error: code = NotFound desc = could not find container \"f4aaa0765a07f4839c71e2b2a303a3c0c625cc8d1414133eff523c9a0838b442\": container with ID starting with f4aaa0765a07f4839c71e2b2a303a3c0c625cc8d1414133eff523c9a0838b442 not found: ID does not exist" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.212425 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgvmt\" (UniqueName: \"kubernetes.io/projected/a6821f92-2d15-4dc0-92ed-7a30cef98db9-kube-api-access-tgvmt\") pod \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.212563 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-service-ca\") pod \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.212622 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-config\") pod \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.212761 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-oauth-serving-cert\") pod \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.212899 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-oauth-config\") pod \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.212964 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-serving-cert\") pod 
\"a6821f92-2d15-4dc0-92ed-7a30cef98db9\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.213050 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-trusted-ca-bundle\") pod \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.213641 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-service-ca" (OuterVolumeSpecName: "service-ca") pod "a6821f92-2d15-4dc0-92ed-7a30cef98db9" (UID: "a6821f92-2d15-4dc0-92ed-7a30cef98db9"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.213732 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-config" (OuterVolumeSpecName: "console-config") pod "a6821f92-2d15-4dc0-92ed-7a30cef98db9" (UID: "a6821f92-2d15-4dc0-92ed-7a30cef98db9"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.214378 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a6821f92-2d15-4dc0-92ed-7a30cef98db9" (UID: "a6821f92-2d15-4dc0-92ed-7a30cef98db9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.214411 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a6821f92-2d15-4dc0-92ed-7a30cef98db9" (UID: "a6821f92-2d15-4dc0-92ed-7a30cef98db9"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.222661 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a6821f92-2d15-4dc0-92ed-7a30cef98db9" (UID: "a6821f92-2d15-4dc0-92ed-7a30cef98db9"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.223052 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6821f92-2d15-4dc0-92ed-7a30cef98db9-kube-api-access-tgvmt" (OuterVolumeSpecName: "kube-api-access-tgvmt") pod "a6821f92-2d15-4dc0-92ed-7a30cef98db9" (UID: "a6821f92-2d15-4dc0-92ed-7a30cef98db9"). InnerVolumeSpecName "kube-api-access-tgvmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.227504 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a6821f92-2d15-4dc0-92ed-7a30cef98db9" (UID: "a6821f92-2d15-4dc0-92ed-7a30cef98db9"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.314517 4775 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.314573 4775 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.314594 4775 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.314611 4775 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.314628 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgvmt\" (UniqueName: \"kubernetes.io/projected/a6821f92-2d15-4dc0-92ed-7a30cef98db9-kube-api-access-tgvmt\") on node \"crc\" DevicePath \"\"" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.314647 4775 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.314662 4775 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.382792 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-fgb82"] Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.389563 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-fgb82"] Jan 23 14:18:31 crc kubenswrapper[4775]: I0123 14:18:31.725632 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6821f92-2d15-4dc0-92ed-7a30cef98db9" path="/var/lib/kubelet/pods/a6821f92-2d15-4dc0-92ed-7a30cef98db9/volumes" Jan 23 14:18:32 crc kubenswrapper[4775]: I0123 14:18:32.053714 4775 generic.go:334] "Generic (PLEG): container finished" podID="6f15de03-78a8-4158-8a06-0174d617e32b" containerID="41c36445db7844bf1524cc6bf76ea62e882c8f64c3b04b1bf6092f57c54b3805" exitCode=0 Jan 23 14:18:32 crc kubenswrapper[4775]: I0123 14:18:32.053763 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" event={"ID":"6f15de03-78a8-4158-8a06-0174d617e32b","Type":"ContainerDied","Data":"41c36445db7844bf1524cc6bf76ea62e882c8f64c3b04b1bf6092f57c54b3805"} Jan 23 14:18:33 crc kubenswrapper[4775]: I0123 14:18:33.066157 4775 generic.go:334] "Generic (PLEG): container finished" podID="6f15de03-78a8-4158-8a06-0174d617e32b" containerID="ce3c959548e46b225f00e83f12330d256bd0f985a80a48e96d43c7bc6cc4968a" exitCode=0 Jan 23 14:18:33 crc kubenswrapper[4775]: I0123 14:18:33.066260 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" event={"ID":"6f15de03-78a8-4158-8a06-0174d617e32b","Type":"ContainerDied","Data":"ce3c959548e46b225f00e83f12330d256bd0f985a80a48e96d43c7bc6cc4968a"} Jan 23 14:18:34 crc kubenswrapper[4775]: I0123 14:18:34.348314 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" Jan 23 14:18:34 crc kubenswrapper[4775]: I0123 14:18:34.470250 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb88k\" (UniqueName: \"kubernetes.io/projected/6f15de03-78a8-4158-8a06-0174d617e32b-kube-api-access-vb88k\") pod \"6f15de03-78a8-4158-8a06-0174d617e32b\" (UID: \"6f15de03-78a8-4158-8a06-0174d617e32b\") " Jan 23 14:18:34 crc kubenswrapper[4775]: I0123 14:18:34.470316 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f15de03-78a8-4158-8a06-0174d617e32b-util\") pod \"6f15de03-78a8-4158-8a06-0174d617e32b\" (UID: \"6f15de03-78a8-4158-8a06-0174d617e32b\") " Jan 23 14:18:34 crc kubenswrapper[4775]: I0123 14:18:34.470349 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f15de03-78a8-4158-8a06-0174d617e32b-bundle\") pod \"6f15de03-78a8-4158-8a06-0174d617e32b\" (UID: \"6f15de03-78a8-4158-8a06-0174d617e32b\") " Jan 23 14:18:34 crc kubenswrapper[4775]: I0123 14:18:34.471226 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f15de03-78a8-4158-8a06-0174d617e32b-bundle" (OuterVolumeSpecName: "bundle") pod "6f15de03-78a8-4158-8a06-0174d617e32b" (UID: "6f15de03-78a8-4158-8a06-0174d617e32b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:18:34 crc kubenswrapper[4775]: I0123 14:18:34.478127 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f15de03-78a8-4158-8a06-0174d617e32b-kube-api-access-vb88k" (OuterVolumeSpecName: "kube-api-access-vb88k") pod "6f15de03-78a8-4158-8a06-0174d617e32b" (UID: "6f15de03-78a8-4158-8a06-0174d617e32b"). InnerVolumeSpecName "kube-api-access-vb88k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:18:34 crc kubenswrapper[4775]: I0123 14:18:34.497141 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f15de03-78a8-4158-8a06-0174d617e32b-util" (OuterVolumeSpecName: "util") pod "6f15de03-78a8-4158-8a06-0174d617e32b" (UID: "6f15de03-78a8-4158-8a06-0174d617e32b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:18:34 crc kubenswrapper[4775]: I0123 14:18:34.571326 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb88k\" (UniqueName: \"kubernetes.io/projected/6f15de03-78a8-4158-8a06-0174d617e32b-kube-api-access-vb88k\") on node \"crc\" DevicePath \"\"" Jan 23 14:18:34 crc kubenswrapper[4775]: I0123 14:18:34.571358 4775 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f15de03-78a8-4158-8a06-0174d617e32b-util\") on node \"crc\" DevicePath \"\"" Jan 23 14:18:34 crc kubenswrapper[4775]: I0123 14:18:34.571367 4775 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f15de03-78a8-4158-8a06-0174d617e32b-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:18:35 crc kubenswrapper[4775]: I0123 14:18:35.082962 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" event={"ID":"6f15de03-78a8-4158-8a06-0174d617e32b","Type":"ContainerDied","Data":"358e67ba750c945eb1905172d8ca362a184f647280970764596331751b3f85e7"} Jan 23 14:18:35 crc kubenswrapper[4775]: I0123 14:18:35.083023 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="358e67ba750c945eb1905172d8ca362a184f647280970764596331751b3f85e7" Jan 23 14:18:35 crc kubenswrapper[4775]: I0123 14:18:35.083130 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.424243 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57"] Jan 23 14:18:43 crc kubenswrapper[4775]: E0123 14:18:43.425074 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f15de03-78a8-4158-8a06-0174d617e32b" containerName="pull" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.425090 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f15de03-78a8-4158-8a06-0174d617e32b" containerName="pull" Jan 23 14:18:43 crc kubenswrapper[4775]: E0123 14:18:43.425106 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f15de03-78a8-4158-8a06-0174d617e32b" containerName="extract" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.425114 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f15de03-78a8-4158-8a06-0174d617e32b" containerName="extract" Jan 23 14:18:43 crc kubenswrapper[4775]: E0123 14:18:43.425129 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f15de03-78a8-4158-8a06-0174d617e32b" containerName="util" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.425137 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f15de03-78a8-4158-8a06-0174d617e32b" containerName="util" Jan 23 14:18:43 crc kubenswrapper[4775]: E0123 14:18:43.425150 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6821f92-2d15-4dc0-92ed-7a30cef98db9" containerName="console" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.425158 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6821f92-2d15-4dc0-92ed-7a30cef98db9" containerName="console" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.425264 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6821f92-2d15-4dc0-92ed-7a30cef98db9" containerName="console" Jan 23 
14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.425279 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f15de03-78a8-4158-8a06-0174d617e32b" containerName="extract" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.425693 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.428747 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.428875 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.428931 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.429790 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-bfk62" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.430368 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.454553 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57"] Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.583961 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/838b952f-6d05-4955-82fd-9cf8a017c5b5-apiservice-cert\") pod \"metallb-operator-controller-manager-558d9b5f8-fgs57\" (UID: \"838b952f-6d05-4955-82fd-9cf8a017c5b5\") " pod="metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.584028 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/838b952f-6d05-4955-82fd-9cf8a017c5b5-webhook-cert\") pod \"metallb-operator-controller-manager-558d9b5f8-fgs57\" (UID: \"838b952f-6d05-4955-82fd-9cf8a017c5b5\") " pod="metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.584050 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvbz8\" (UniqueName: \"kubernetes.io/projected/838b952f-6d05-4955-82fd-9cf8a017c5b5-kube-api-access-tvbz8\") pod \"metallb-operator-controller-manager-558d9b5f8-fgs57\" (UID: \"838b952f-6d05-4955-82fd-9cf8a017c5b5\") " pod="metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.660793 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz"] Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.661397 4775 util.go:30] "No sandbox for pod can be found. 
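[editor's note] The RemoveStaleState / "Deleted CPUSet assignment" entries above show the CPU and memory managers pruning per-container state for pods that no longer exist (the console pod and the marketplace bundle pod) when a new pod is admitted. A minimal sketch of that pattern, using an in-memory map keyed by pod UID and container name (a simplification, not the managers' real state types):

package main

import "fmt"

// assignments mimics the state kept per (podUID, container), e.g. a CPU
// set or memory affinity; string stands in for the real value type.
type assignments map[string]map[string]string

// removeStaleState drops state for every pod that is no longer active,
// mirroring the "RemoveStaleState: removing container" entries above.
func (a assignments) removeStaleState(activePods map[string]bool) {
	for podUID, containers := range a {
		if activePods[podUID] {
			continue
		}
		for name := range containers {
			fmt.Printf("RemoveStaleState: removing container podUID=%q container=%q\n", podUID, name)
		}
		delete(a, podUID)
	}
}

func main() {
	state := assignments{
		"6f15de03": {"pull": "cpus 0-1", "extract": "cpus 0-1", "util": "cpus 0-1"},
		"838b952f": {"manager": "cpus 2-3"},
	}
	state.removeStaleState(map[string]bool{"838b952f": true})
	fmt.Println("remaining pods with state:", len(state))
}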
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.662674 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-dj9rr" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.664088 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.666703 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.678544 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz"] Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.685203 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/838b952f-6d05-4955-82fd-9cf8a017c5b5-apiservice-cert\") pod \"metallb-operator-controller-manager-558d9b5f8-fgs57\" (UID: \"838b952f-6d05-4955-82fd-9cf8a017c5b5\") " pod="metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.685255 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/838b952f-6d05-4955-82fd-9cf8a017c5b5-webhook-cert\") pod \"metallb-operator-controller-manager-558d9b5f8-fgs57\" (UID: \"838b952f-6d05-4955-82fd-9cf8a017c5b5\") " pod="metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.685279 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvbz8\" (UniqueName: \"kubernetes.io/projected/838b952f-6d05-4955-82fd-9cf8a017c5b5-kube-api-access-tvbz8\") pod \"metallb-operator-controller-manager-558d9b5f8-fgs57\" (UID: \"838b952f-6d05-4955-82fd-9cf8a017c5b5\") " pod="metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.690494 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/838b952f-6d05-4955-82fd-9cf8a017c5b5-webhook-cert\") pod \"metallb-operator-controller-manager-558d9b5f8-fgs57\" (UID: \"838b952f-6d05-4955-82fd-9cf8a017c5b5\") " pod="metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.690973 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/838b952f-6d05-4955-82fd-9cf8a017c5b5-apiservice-cert\") pod \"metallb-operator-controller-manager-558d9b5f8-fgs57\" (UID: \"838b952f-6d05-4955-82fd-9cf8a017c5b5\") " pod="metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.704585 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvbz8\" (UniqueName: \"kubernetes.io/projected/838b952f-6d05-4955-82fd-9cf8a017c5b5-kube-api-access-tvbz8\") pod \"metallb-operator-controller-manager-558d9b5f8-fgs57\" (UID: \"838b952f-6d05-4955-82fd-9cf8a017c5b5\") " pod="metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.743263 4775 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.786545 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fa6cceac-c1d4-4e7c-9e60-4dd698abc182-apiservice-cert\") pod \"metallb-operator-webhook-server-699f5544f9-66nkz\" (UID: \"fa6cceac-c1d4-4e7c-9e60-4dd698abc182\") " pod="metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.786859 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fa6cceac-c1d4-4e7c-9e60-4dd698abc182-webhook-cert\") pod \"metallb-operator-webhook-server-699f5544f9-66nkz\" (UID: \"fa6cceac-c1d4-4e7c-9e60-4dd698abc182\") " pod="metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.786878 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wj4h\" (UniqueName: \"kubernetes.io/projected/fa6cceac-c1d4-4e7c-9e60-4dd698abc182-kube-api-access-6wj4h\") pod \"metallb-operator-webhook-server-699f5544f9-66nkz\" (UID: \"fa6cceac-c1d4-4e7c-9e60-4dd698abc182\") " pod="metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.891507 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fa6cceac-c1d4-4e7c-9e60-4dd698abc182-webhook-cert\") pod \"metallb-operator-webhook-server-699f5544f9-66nkz\" (UID: \"fa6cceac-c1d4-4e7c-9e60-4dd698abc182\") " pod="metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.891753 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wj4h\" (UniqueName: \"kubernetes.io/projected/fa6cceac-c1d4-4e7c-9e60-4dd698abc182-kube-api-access-6wj4h\") pod \"metallb-operator-webhook-server-699f5544f9-66nkz\" (UID: \"fa6cceac-c1d4-4e7c-9e60-4dd698abc182\") " pod="metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.891913 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fa6cceac-c1d4-4e7c-9e60-4dd698abc182-apiservice-cert\") pod \"metallb-operator-webhook-server-699f5544f9-66nkz\" (UID: \"fa6cceac-c1d4-4e7c-9e60-4dd698abc182\") " pod="metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.900423 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fa6cceac-c1d4-4e7c-9e60-4dd698abc182-webhook-cert\") pod \"metallb-operator-webhook-server-699f5544f9-66nkz\" (UID: \"fa6cceac-c1d4-4e7c-9e60-4dd698abc182\") " pod="metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.908105 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fa6cceac-c1d4-4e7c-9e60-4dd698abc182-apiservice-cert\") pod \"metallb-operator-webhook-server-699f5544f9-66nkz\" (UID: 
\"fa6cceac-c1d4-4e7c-9e60-4dd698abc182\") " pod="metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.911531 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wj4h\" (UniqueName: \"kubernetes.io/projected/fa6cceac-c1d4-4e7c-9e60-4dd698abc182-kube-api-access-6wj4h\") pod \"metallb-operator-webhook-server-699f5544f9-66nkz\" (UID: \"fa6cceac-c1d4-4e7c-9e60-4dd698abc182\") " pod="metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.973400 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz" Jan 23 14:18:44 crc kubenswrapper[4775]: I0123 14:18:44.150943 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz"] Jan 23 14:18:44 crc kubenswrapper[4775]: I0123 14:18:44.213261 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57"] Jan 23 14:18:45 crc kubenswrapper[4775]: I0123 14:18:45.142468 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57" event={"ID":"838b952f-6d05-4955-82fd-9cf8a017c5b5","Type":"ContainerStarted","Data":"31d26ecf0593c7bd8ee008f0c003dd240a6c9e29631f2280aeaa76f92e9519eb"} Jan 23 14:18:45 crc kubenswrapper[4775]: I0123 14:18:45.144010 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz" event={"ID":"fa6cceac-c1d4-4e7c-9e60-4dd698abc182","Type":"ContainerStarted","Data":"8e61c6aed48d6e9729931df5cccc8e2c99bac20ba2cc4ed55f76afdd1451bc55"} Jan 23 14:18:48 crc kubenswrapper[4775]: I0123 14:18:48.162520 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57" event={"ID":"838b952f-6d05-4955-82fd-9cf8a017c5b5","Type":"ContainerStarted","Data":"b1e1368cee8aa55ec36a56c7081530eab3e8dc7106c24939b70dfd6fc64fdf88"} Jan 23 14:18:48 crc kubenswrapper[4775]: I0123 14:18:48.163232 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57" Jan 23 14:18:48 crc kubenswrapper[4775]: I0123 14:18:48.187776 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57" podStartSLOduration=1.928141788 podStartE2EDuration="5.187754879s" podCreationTimestamp="2026-01-23 14:18:43 +0000 UTC" firstStartedPulling="2026-01-23 14:18:44.220649398 +0000 UTC m=+871.215478148" lastFinishedPulling="2026-01-23 14:18:47.480262499 +0000 UTC m=+874.475091239" observedRunningTime="2026-01-23 14:18:48.181916818 +0000 UTC m=+875.176745598" watchObservedRunningTime="2026-01-23 14:18:48.187754879 +0000 UTC m=+875.182583639" Jan 23 14:18:50 crc kubenswrapper[4775]: I0123 14:18:50.175887 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz" event={"ID":"fa6cceac-c1d4-4e7c-9e60-4dd698abc182","Type":"ContainerStarted","Data":"1f2020d5f4443d4280788cd115936a0c4526ce925c109f4db0f17392eeff8c07"} Jan 23 14:18:50 crc kubenswrapper[4775]: I0123 14:18:50.177952 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz" Jan 23 14:18:50 crc kubenswrapper[4775]: I0123 14:18:50.207765 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz" podStartSLOduration=2.039282314 podStartE2EDuration="7.207734757s" podCreationTimestamp="2026-01-23 14:18:43 +0000 UTC" firstStartedPulling="2026-01-23 14:18:44.164241855 +0000 UTC m=+871.159070595" lastFinishedPulling="2026-01-23 14:18:49.332694288 +0000 UTC m=+876.327523038" observedRunningTime="2026-01-23 14:18:50.204338388 +0000 UTC m=+877.199167168" watchObservedRunningTime="2026-01-23 14:18:50.207734757 +0000 UTC m=+877.202563537" Jan 23 14:18:53 crc kubenswrapper[4775]: I0123 14:18:53.219073 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:18:53 crc kubenswrapper[4775]: I0123 14:18:53.219178 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:19:03 crc kubenswrapper[4775]: I0123 14:19:03.980054 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz" Jan 23 14:19:23 crc kubenswrapper[4775]: I0123 14:19:23.219006 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:19:23 crc kubenswrapper[4775]: I0123 14:19:23.221507 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:19:23 crc kubenswrapper[4775]: I0123 14:19:23.747912 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.571900 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv"] Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.572697 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.575145 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.580242 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-wxcj6" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.582664 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-pv6fp"] Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.585426 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.590905 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.592174 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv"] Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.595658 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.658971 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq5gz\" (UniqueName: \"kubernetes.io/projected/6831fcdc-628b-4bef-bf9c-5e24b63f9196-kube-api-access-sq5gz\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.659015 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6831fcdc-628b-4bef-bf9c-5e24b63f9196-reloader\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.659036 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6831fcdc-628b-4bef-bf9c-5e24b63f9196-metrics-certs\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.659149 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6831fcdc-628b-4bef-bf9c-5e24b63f9196-frr-conf\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.659228 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6831fcdc-628b-4bef-bf9c-5e24b63f9196-frr-startup\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.659272 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6831fcdc-628b-4bef-bf9c-5e24b63f9196-metrics\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 
14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.659361 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hglwq\" (UniqueName: \"kubernetes.io/projected/9eb8e4c8-06ce-427a-9b91-7b77d4e8a783-kube-api-access-hglwq\") pod \"frr-k8s-webhook-server-7df86c4f6c-p49hv\" (UID: \"9eb8e4c8-06ce-427a-9b91-7b77d4e8a783\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.659389 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6831fcdc-628b-4bef-bf9c-5e24b63f9196-frr-sockets\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.659412 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9eb8e4c8-06ce-427a-9b91-7b77d4e8a783-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-p49hv\" (UID: \"9eb8e4c8-06ce-427a-9b91-7b77d4e8a783\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.685470 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-x4gxj"] Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.686283 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-x4gxj" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.689727 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.690673 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.690743 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-bdfk9" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.696152 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.716229 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-7qz58"] Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.717119 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-7qz58" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.718354 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.743507 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-7qz58"] Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.760857 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7755c0c4-4e11-47c6-955d-453408fd4316-cert\") pod \"controller-6968d8fdc4-7qz58\" (UID: \"7755c0c4-4e11-47c6-955d-453408fd4316\") " pod="metallb-system/controller-6968d8fdc4-7qz58" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.760926 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hglwq\" (UniqueName: \"kubernetes.io/projected/9eb8e4c8-06ce-427a-9b91-7b77d4e8a783-kube-api-access-hglwq\") pod \"frr-k8s-webhook-server-7df86c4f6c-p49hv\" (UID: \"9eb8e4c8-06ce-427a-9b91-7b77d4e8a783\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.760960 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6831fcdc-628b-4bef-bf9c-5e24b63f9196-frr-sockets\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.760980 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9eb8e4c8-06ce-427a-9b91-7b77d4e8a783-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-p49hv\" (UID: \"9eb8e4c8-06ce-427a-9b91-7b77d4e8a783\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.761002 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9334cd3c-2410-4fbd-8cc1-14edca3afb92-memberlist\") pod \"speaker-x4gxj\" (UID: \"9334cd3c-2410-4fbd-8cc1-14edca3afb92\") " pod="metallb-system/speaker-x4gxj" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.761023 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq5gz\" (UniqueName: \"kubernetes.io/projected/6831fcdc-628b-4bef-bf9c-5e24b63f9196-kube-api-access-sq5gz\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.761045 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6831fcdc-628b-4bef-bf9c-5e24b63f9196-reloader\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.761059 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9334cd3c-2410-4fbd-8cc1-14edca3afb92-metallb-excludel2\") pod \"speaker-x4gxj\" (UID: \"9334cd3c-2410-4fbd-8cc1-14edca3afb92\") " pod="metallb-system/speaker-x4gxj" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.761085 4775 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6831fcdc-628b-4bef-bf9c-5e24b63f9196-metrics-certs\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.761115 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7755c0c4-4e11-47c6-955d-453408fd4316-metrics-certs\") pod \"controller-6968d8fdc4-7qz58\" (UID: \"7755c0c4-4e11-47c6-955d-453408fd4316\") " pod="metallb-system/controller-6968d8fdc4-7qz58" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.761131 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh2jd\" (UniqueName: \"kubernetes.io/projected/7755c0c4-4e11-47c6-955d-453408fd4316-kube-api-access-jh2jd\") pod \"controller-6968d8fdc4-7qz58\" (UID: \"7755c0c4-4e11-47c6-955d-453408fd4316\") " pod="metallb-system/controller-6968d8fdc4-7qz58" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.761147 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6831fcdc-628b-4bef-bf9c-5e24b63f9196-frr-conf\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.761169 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnnds\" (UniqueName: \"kubernetes.io/projected/9334cd3c-2410-4fbd-8cc1-14edca3afb92-kube-api-access-mnnds\") pod \"speaker-x4gxj\" (UID: \"9334cd3c-2410-4fbd-8cc1-14edca3afb92\") " pod="metallb-system/speaker-x4gxj" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.761186 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9334cd3c-2410-4fbd-8cc1-14edca3afb92-metrics-certs\") pod \"speaker-x4gxj\" (UID: \"9334cd3c-2410-4fbd-8cc1-14edca3afb92\") " pod="metallb-system/speaker-x4gxj" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.761213 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6831fcdc-628b-4bef-bf9c-5e24b63f9196-frr-startup\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.761238 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6831fcdc-628b-4bef-bf9c-5e24b63f9196-metrics\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: E0123 14:19:24.761694 4775 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 23 14:19:24 crc kubenswrapper[4775]: E0123 14:19:24.761746 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9eb8e4c8-06ce-427a-9b91-7b77d4e8a783-cert podName:9eb8e4c8-06ce-427a-9b91-7b77d4e8a783 nodeName:}" failed. No retries permitted until 2026-01-23 14:19:25.261730386 +0000 UTC m=+912.256559126 (durationBeforeRetry 500ms). 
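[editor's note] Each volume above walks a three-phase pipeline: VerifyControllerAttachedVolume (reconciler_common.go:245), then MountVolume started (reconciler_common.go:218), then MountVolume.SetUp succeeded (operation_generator.go:637); secret-backed volumes whose secret does not exist yet fail SetUp and fall into the retry path shown next. A schematic of that pipeline, with toy types standing in for the reconciler's real ones:

package main

import (
	"errors"
	"fmt"
)

// volume is a schematic stand-in for the reconciler's per-volume work:
// verify the volume is attached, then mount (SetUp) it. Secret-backed
// volumes fail SetUp until the secret exists, as in the entries above.
type volume struct {
	name      string
	secretsOK bool
}

var errSecretNotFound = errors.New("secret not found")

func (v *volume) verifyAttached() error { return nil } // node-local: always attached

func (v *volume) setUp() error {
	if !v.secretsOK {
		return errSecretNotFound
	}
	return nil
}

// reconcile runs the Verify -> Mount pipeline once per volume and
// returns the volumes that must be retried with backoff.
func reconcile(vols []*volume) (retry []*volume) {
	for _, v := range vols {
		if err := v.verifyAttached(); err != nil {
			retry = append(retry, v)
			continue
		}
		if err := v.setUp(); err != nil {
			fmt.Printf("MountVolume.SetUp failed for volume %q: %v\n", v.name, err)
			retry = append(retry, v)
			continue
		}
		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.name)
	}
	return retry
}

func main() {
	vols := []*volume{
		{name: "frr-startup", secretsOK: true},
		{name: "cert", secretsOK: false}, // webhook cert not yet published
	}
	pending := reconcile(vols)
	fmt.Println("volumes queued for retry:", len(pending))
}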
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9eb8e4c8-06ce-427a-9b91-7b77d4e8a783-cert") pod "frr-k8s-webhook-server-7df86c4f6c-p49hv" (UID: "9eb8e4c8-06ce-427a-9b91-7b77d4e8a783") : secret "frr-k8s-webhook-server-cert" not found Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.761787 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6831fcdc-628b-4bef-bf9c-5e24b63f9196-frr-sockets\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.761873 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6831fcdc-628b-4bef-bf9c-5e24b63f9196-metrics\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.762078 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6831fcdc-628b-4bef-bf9c-5e24b63f9196-frr-conf\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.765471 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6831fcdc-628b-4bef-bf9c-5e24b63f9196-reloader\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.766567 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6831fcdc-628b-4bef-bf9c-5e24b63f9196-frr-startup\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.772365 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6831fcdc-628b-4bef-bf9c-5e24b63f9196-metrics-certs\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.785284 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hglwq\" (UniqueName: \"kubernetes.io/projected/9eb8e4c8-06ce-427a-9b91-7b77d4e8a783-kube-api-access-hglwq\") pod \"frr-k8s-webhook-server-7df86c4f6c-p49hv\" (UID: \"9eb8e4c8-06ce-427a-9b91-7b77d4e8a783\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.786233 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq5gz\" (UniqueName: \"kubernetes.io/projected/6831fcdc-628b-4bef-bf9c-5e24b63f9196-kube-api-access-sq5gz\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.862918 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7755c0c4-4e11-47c6-955d-453408fd4316-cert\") pod \"controller-6968d8fdc4-7qz58\" (UID: \"7755c0c4-4e11-47c6-955d-453408fd4316\") " pod="metallb-system/controller-6968d8fdc4-7qz58" Jan 23 14:19:24 crc kubenswrapper[4775]: 
I0123 14:19:24.863036 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9334cd3c-2410-4fbd-8cc1-14edca3afb92-memberlist\") pod \"speaker-x4gxj\" (UID: \"9334cd3c-2410-4fbd-8cc1-14edca3afb92\") " pod="metallb-system/speaker-x4gxj" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.863062 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9334cd3c-2410-4fbd-8cc1-14edca3afb92-metallb-excludel2\") pod \"speaker-x4gxj\" (UID: \"9334cd3c-2410-4fbd-8cc1-14edca3afb92\") " pod="metallb-system/speaker-x4gxj" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.863086 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7755c0c4-4e11-47c6-955d-453408fd4316-metrics-certs\") pod \"controller-6968d8fdc4-7qz58\" (UID: \"7755c0c4-4e11-47c6-955d-453408fd4316\") " pod="metallb-system/controller-6968d8fdc4-7qz58" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.863104 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh2jd\" (UniqueName: \"kubernetes.io/projected/7755c0c4-4e11-47c6-955d-453408fd4316-kube-api-access-jh2jd\") pod \"controller-6968d8fdc4-7qz58\" (UID: \"7755c0c4-4e11-47c6-955d-453408fd4316\") " pod="metallb-system/controller-6968d8fdc4-7qz58" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.863127 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnnds\" (UniqueName: \"kubernetes.io/projected/9334cd3c-2410-4fbd-8cc1-14edca3afb92-kube-api-access-mnnds\") pod \"speaker-x4gxj\" (UID: \"9334cd3c-2410-4fbd-8cc1-14edca3afb92\") " pod="metallb-system/speaker-x4gxj" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.863142 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9334cd3c-2410-4fbd-8cc1-14edca3afb92-metrics-certs\") pod \"speaker-x4gxj\" (UID: \"9334cd3c-2410-4fbd-8cc1-14edca3afb92\") " pod="metallb-system/speaker-x4gxj" Jan 23 14:19:24 crc kubenswrapper[4775]: E0123 14:19:24.863261 4775 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 23 14:19:24 crc kubenswrapper[4775]: E0123 14:19:24.863313 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9334cd3c-2410-4fbd-8cc1-14edca3afb92-metrics-certs podName:9334cd3c-2410-4fbd-8cc1-14edca3afb92 nodeName:}" failed. No retries permitted until 2026-01-23 14:19:25.363297453 +0000 UTC m=+912.358126193 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9334cd3c-2410-4fbd-8cc1-14edca3afb92-metrics-certs") pod "speaker-x4gxj" (UID: "9334cd3c-2410-4fbd-8cc1-14edca3afb92") : secret "speaker-certs-secret" not found Jan 23 14:19:24 crc kubenswrapper[4775]: E0123 14:19:24.863475 4775 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 23 14:19:24 crc kubenswrapper[4775]: E0123 14:19:24.863539 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7755c0c4-4e11-47c6-955d-453408fd4316-metrics-certs podName:7755c0c4-4e11-47c6-955d-453408fd4316 nodeName:}" failed. 
No retries permitted until 2026-01-23 14:19:25.363522309 +0000 UTC m=+912.358351049 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7755c0c4-4e11-47c6-955d-453408fd4316-metrics-certs") pod "controller-6968d8fdc4-7qz58" (UID: "7755c0c4-4e11-47c6-955d-453408fd4316") : secret "controller-certs-secret" not found Jan 23 14:19:24 crc kubenswrapper[4775]: E0123 14:19:24.863609 4775 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 23 14:19:24 crc kubenswrapper[4775]: E0123 14:19:24.863651 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9334cd3c-2410-4fbd-8cc1-14edca3afb92-memberlist podName:9334cd3c-2410-4fbd-8cc1-14edca3afb92 nodeName:}" failed. No retries permitted until 2026-01-23 14:19:25.363640993 +0000 UTC m=+912.358469833 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/9334cd3c-2410-4fbd-8cc1-14edca3afb92-memberlist") pod "speaker-x4gxj" (UID: "9334cd3c-2410-4fbd-8cc1-14edca3afb92") : secret "metallb-memberlist" not found Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.863973 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9334cd3c-2410-4fbd-8cc1-14edca3afb92-metallb-excludel2\") pod \"speaker-x4gxj\" (UID: \"9334cd3c-2410-4fbd-8cc1-14edca3afb92\") " pod="metallb-system/speaker-x4gxj" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.864149 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.881260 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7755c0c4-4e11-47c6-955d-453408fd4316-cert\") pod \"controller-6968d8fdc4-7qz58\" (UID: \"7755c0c4-4e11-47c6-955d-453408fd4316\") " pod="metallb-system/controller-6968d8fdc4-7qz58" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.883931 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnnds\" (UniqueName: \"kubernetes.io/projected/9334cd3c-2410-4fbd-8cc1-14edca3afb92-kube-api-access-mnnds\") pod \"speaker-x4gxj\" (UID: \"9334cd3c-2410-4fbd-8cc1-14edca3afb92\") " pod="metallb-system/speaker-x4gxj" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.886627 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh2jd\" (UniqueName: \"kubernetes.io/projected/7755c0c4-4e11-47c6-955d-453408fd4316-kube-api-access-jh2jd\") pod \"controller-6968d8fdc4-7qz58\" (UID: \"7755c0c4-4e11-47c6-955d-453408fd4316\") " pod="metallb-system/controller-6968d8fdc4-7qz58" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.900923 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:25 crc kubenswrapper[4775]: I0123 14:19:25.268739 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9eb8e4c8-06ce-427a-9b91-7b77d4e8a783-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-p49hv\" (UID: \"9eb8e4c8-06ce-427a-9b91-7b77d4e8a783\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv" Jan 23 14:19:25 crc kubenswrapper[4775]: I0123 14:19:25.273989 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9eb8e4c8-06ce-427a-9b91-7b77d4e8a783-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-p49hv\" (UID: \"9eb8e4c8-06ce-427a-9b91-7b77d4e8a783\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv" Jan 23 14:19:25 crc kubenswrapper[4775]: I0123 14:19:25.370496 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9334cd3c-2410-4fbd-8cc1-14edca3afb92-memberlist\") pod \"speaker-x4gxj\" (UID: \"9334cd3c-2410-4fbd-8cc1-14edca3afb92\") " pod="metallb-system/speaker-x4gxj" Jan 23 14:19:25 crc kubenswrapper[4775]: I0123 14:19:25.370569 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7755c0c4-4e11-47c6-955d-453408fd4316-metrics-certs\") pod \"controller-6968d8fdc4-7qz58\" (UID: \"7755c0c4-4e11-47c6-955d-453408fd4316\") " pod="metallb-system/controller-6968d8fdc4-7qz58" Jan 23 14:19:25 crc kubenswrapper[4775]: I0123 14:19:25.370611 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9334cd3c-2410-4fbd-8cc1-14edca3afb92-metrics-certs\") pod \"speaker-x4gxj\" (UID: \"9334cd3c-2410-4fbd-8cc1-14edca3afb92\") " pod="metallb-system/speaker-x4gxj" Jan 23 14:19:25 crc kubenswrapper[4775]: E0123 14:19:25.371056 4775 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 23 14:19:25 crc kubenswrapper[4775]: E0123 14:19:25.371223 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9334cd3c-2410-4fbd-8cc1-14edca3afb92-memberlist podName:9334cd3c-2410-4fbd-8cc1-14edca3afb92 nodeName:}" failed. No retries permitted until 2026-01-23 14:19:26.37119872 +0000 UTC m=+913.366027480 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/9334cd3c-2410-4fbd-8cc1-14edca3afb92-memberlist") pod "speaker-x4gxj" (UID: "9334cd3c-2410-4fbd-8cc1-14edca3afb92") : secret "metallb-memberlist" not found Jan 23 14:19:25 crc kubenswrapper[4775]: I0123 14:19:25.373837 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9334cd3c-2410-4fbd-8cc1-14edca3afb92-metrics-certs\") pod \"speaker-x4gxj\" (UID: \"9334cd3c-2410-4fbd-8cc1-14edca3afb92\") " pod="metallb-system/speaker-x4gxj" Jan 23 14:19:25 crc kubenswrapper[4775]: I0123 14:19:25.377258 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7755c0c4-4e11-47c6-955d-453408fd4316-metrics-certs\") pod \"controller-6968d8fdc4-7qz58\" (UID: \"7755c0c4-4e11-47c6-955d-453408fd4316\") " pod="metallb-system/controller-6968d8fdc4-7qz58" Jan 23 14:19:25 crc kubenswrapper[4775]: I0123 14:19:25.419172 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pv6fp" event={"ID":"6831fcdc-628b-4bef-bf9c-5e24b63f9196","Type":"ContainerStarted","Data":"db1a01bc1ba1ee42d7e50bc0d9c3a1c450dee6ca84d4fbebd85aad6d42b30298"} Jan 23 14:19:25 crc kubenswrapper[4775]: I0123 14:19:25.488677 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv" Jan 23 14:19:25 crc kubenswrapper[4775]: I0123 14:19:25.629445 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-7qz58" Jan 23 14:19:25 crc kubenswrapper[4775]: I0123 14:19:25.787891 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv"] Jan 23 14:19:25 crc kubenswrapper[4775]: W0123 14:19:25.807188 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9eb8e4c8_06ce_427a_9b91_7b77d4e8a783.slice/crio-43b2dd981de197f98bd2f99512806f493f10245ed8b65dd0901972e979d84574 WatchSource:0}: Error finding container 43b2dd981de197f98bd2f99512806f493f10245ed8b65dd0901972e979d84574: Status 404 returned error can't find the container with id 43b2dd981de197f98bd2f99512806f493f10245ed8b65dd0901972e979d84574 Jan 23 14:19:25 crc kubenswrapper[4775]: I0123 14:19:25.871779 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-7qz58"] Jan 23 14:19:25 crc kubenswrapper[4775]: W0123 14:19:25.877093 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7755c0c4_4e11_47c6_955d_453408fd4316.slice/crio-17fd6d08c3dd7a38078b74180c299c580896aef821aca33ac211e3a8b3b3f794 WatchSource:0}: Error finding container 17fd6d08c3dd7a38078b74180c299c580896aef821aca33ac211e3a8b3b3f794: Status 404 returned error can't find the container with id 17fd6d08c3dd7a38078b74180c299c580896aef821aca33ac211e3a8b3b3f794 Jan 23 14:19:26 crc kubenswrapper[4775]: I0123 14:19:26.388466 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9334cd3c-2410-4fbd-8cc1-14edca3afb92-memberlist\") pod \"speaker-x4gxj\" (UID: \"9334cd3c-2410-4fbd-8cc1-14edca3afb92\") " pod="metallb-system/speaker-x4gxj" Jan 23 14:19:26 crc kubenswrapper[4775]: I0123 14:19:26.397036 4775 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9334cd3c-2410-4fbd-8cc1-14edca3afb92-memberlist\") pod \"speaker-x4gxj\" (UID: \"9334cd3c-2410-4fbd-8cc1-14edca3afb92\") " pod="metallb-system/speaker-x4gxj" Jan 23 14:19:26 crc kubenswrapper[4775]: I0123 14:19:26.430584 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-7qz58" event={"ID":"7755c0c4-4e11-47c6-955d-453408fd4316","Type":"ContainerStarted","Data":"57f2c829861f1d8e95295d15874ad6927f022e9f4978d657a631c20112805825"} Jan 23 14:19:26 crc kubenswrapper[4775]: I0123 14:19:26.430923 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-7qz58" Jan 23 14:19:26 crc kubenswrapper[4775]: I0123 14:19:26.431054 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-7qz58" event={"ID":"7755c0c4-4e11-47c6-955d-453408fd4316","Type":"ContainerStarted","Data":"5ba1d06968107c6c4878cb34ef755a33804579ee7891a376f65a795c0ac3484b"} Jan 23 14:19:26 crc kubenswrapper[4775]: I0123 14:19:26.431187 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-7qz58" event={"ID":"7755c0c4-4e11-47c6-955d-453408fd4316","Type":"ContainerStarted","Data":"17fd6d08c3dd7a38078b74180c299c580896aef821aca33ac211e3a8b3b3f794"} Jan 23 14:19:26 crc kubenswrapper[4775]: I0123 14:19:26.432442 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv" event={"ID":"9eb8e4c8-06ce-427a-9b91-7b77d4e8a783","Type":"ContainerStarted","Data":"43b2dd981de197f98bd2f99512806f493f10245ed8b65dd0901972e979d84574"} Jan 23 14:19:26 crc kubenswrapper[4775]: I0123 14:19:26.462004 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-7qz58" podStartSLOduration=2.461977933 podStartE2EDuration="2.461977933s" podCreationTimestamp="2026-01-23 14:19:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:19:26.460000785 +0000 UTC m=+913.454829595" watchObservedRunningTime="2026-01-23 14:19:26.461977933 +0000 UTC m=+913.456806713" Jan 23 14:19:26 crc kubenswrapper[4775]: I0123 14:19:26.498511 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-x4gxj" Jan 23 14:19:26 crc kubenswrapper[4775]: W0123 14:19:26.525110 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9334cd3c_2410_4fbd_8cc1_14edca3afb92.slice/crio-ec1d4be9b1f4f98a826241a8b746c5b4889316b6be8ae3f647b850bda393b57e WatchSource:0}: Error finding container ec1d4be9b1f4f98a826241a8b746c5b4889316b6be8ae3f647b850bda393b57e: Status 404 returned error can't find the container with id ec1d4be9b1f4f98a826241a8b746c5b4889316b6be8ae3f647b850bda393b57e Jan 23 14:19:27 crc kubenswrapper[4775]: I0123 14:19:27.448791 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-x4gxj" event={"ID":"9334cd3c-2410-4fbd-8cc1-14edca3afb92","Type":"ContainerStarted","Data":"156441b27aaaded54dcafdd02e2c5c5e6b18f47eede2e0d388a99d3496420beb"} Jan 23 14:19:27 crc kubenswrapper[4775]: I0123 14:19:27.449047 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-x4gxj" event={"ID":"9334cd3c-2410-4fbd-8cc1-14edca3afb92","Type":"ContainerStarted","Data":"c7ab84ab93277513795d64aa01d22abe32a2e419638db5331534b03973fc7c0b"} Jan 23 14:19:27 crc kubenswrapper[4775]: I0123 14:19:27.449057 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-x4gxj" event={"ID":"9334cd3c-2410-4fbd-8cc1-14edca3afb92","Type":"ContainerStarted","Data":"ec1d4be9b1f4f98a826241a8b746c5b4889316b6be8ae3f647b850bda393b57e"} Jan 23 14:19:27 crc kubenswrapper[4775]: I0123 14:19:27.449519 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-x4gxj" Jan 23 14:19:27 crc kubenswrapper[4775]: I0123 14:19:27.466495 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-x4gxj" podStartSLOduration=3.466472231 podStartE2EDuration="3.466472231s" podCreationTimestamp="2026-01-23 14:19:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:19:27.466004988 +0000 UTC m=+914.460833738" watchObservedRunningTime="2026-01-23 14:19:27.466472231 +0000 UTC m=+914.461300961" Jan 23 14:19:33 crc kubenswrapper[4775]: I0123 14:19:33.498103 4775 generic.go:334] "Generic (PLEG): container finished" podID="6831fcdc-628b-4bef-bf9c-5e24b63f9196" containerID="813a6d56c6670b6d99b6c9f72e927be05faf833d5b670b06bdeeb14e982e2169" exitCode=0 Jan 23 14:19:33 crc kubenswrapper[4775]: I0123 14:19:33.498943 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pv6fp" event={"ID":"6831fcdc-628b-4bef-bf9c-5e24b63f9196","Type":"ContainerDied","Data":"813a6d56c6670b6d99b6c9f72e927be05faf833d5b670b06bdeeb14e982e2169"} Jan 23 14:19:33 crc kubenswrapper[4775]: I0123 14:19:33.503289 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv" event={"ID":"9eb8e4c8-06ce-427a-9b91-7b77d4e8a783","Type":"ContainerStarted","Data":"aa9ece7b4d5c7e9fc0f0e34c63e6fd4fe03536eca6e6c15c90afa524847a9383"} Jan 23 14:19:33 crc kubenswrapper[4775]: I0123 14:19:33.504367 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv" Jan 23 14:19:33 crc kubenswrapper[4775]: I0123 14:19:33.583859 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv" podStartSLOduration=2.487079212 
podStartE2EDuration="9.583832015s" podCreationTimestamp="2026-01-23 14:19:24 +0000 UTC" firstStartedPulling="2026-01-23 14:19:25.809739506 +0000 UTC m=+912.804568266" lastFinishedPulling="2026-01-23 14:19:32.906492289 +0000 UTC m=+919.901321069" observedRunningTime="2026-01-23 14:19:33.572549537 +0000 UTC m=+920.567378317" watchObservedRunningTime="2026-01-23 14:19:33.583832015 +0000 UTC m=+920.578660795" Jan 23 14:19:34 crc kubenswrapper[4775]: I0123 14:19:34.514588 4775 generic.go:334] "Generic (PLEG): container finished" podID="6831fcdc-628b-4bef-bf9c-5e24b63f9196" containerID="b200717ccdf0687d46117014ac949cddbb385b39b3e47b3e49905f39327299d3" exitCode=0 Jan 23 14:19:34 crc kubenswrapper[4775]: I0123 14:19:34.514708 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pv6fp" event={"ID":"6831fcdc-628b-4bef-bf9c-5e24b63f9196","Type":"ContainerDied","Data":"b200717ccdf0687d46117014ac949cddbb385b39b3e47b3e49905f39327299d3"} Jan 23 14:19:35 crc kubenswrapper[4775]: I0123 14:19:35.526481 4775 generic.go:334] "Generic (PLEG): container finished" podID="6831fcdc-628b-4bef-bf9c-5e24b63f9196" containerID="8103a0949ce2e5c463436b105a88b8197cdfa3462ff1260aa4072482bb0bdc6b" exitCode=0 Jan 23 14:19:35 crc kubenswrapper[4775]: I0123 14:19:35.526584 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pv6fp" event={"ID":"6831fcdc-628b-4bef-bf9c-5e24b63f9196","Type":"ContainerDied","Data":"8103a0949ce2e5c463436b105a88b8197cdfa3462ff1260aa4072482bb0bdc6b"} Jan 23 14:19:36 crc kubenswrapper[4775]: I0123 14:19:36.502223 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-x4gxj" Jan 23 14:19:36 crc kubenswrapper[4775]: I0123 14:19:36.535543 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pv6fp" event={"ID":"6831fcdc-628b-4bef-bf9c-5e24b63f9196","Type":"ContainerStarted","Data":"36a591050aa0b9f1202328585cc6e296ce83c49824afb9b8a1292713799beeec"} Jan 23 14:19:36 crc kubenswrapper[4775]: I0123 14:19:36.536304 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pv6fp" event={"ID":"6831fcdc-628b-4bef-bf9c-5e24b63f9196","Type":"ContainerStarted","Data":"2c82b926383a0c0d90388e4d5ab7b3327886e8351cea0e7edf651e99569d1ab6"} Jan 23 14:19:36 crc kubenswrapper[4775]: I0123 14:19:36.536417 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pv6fp" event={"ID":"6831fcdc-628b-4bef-bf9c-5e24b63f9196","Type":"ContainerStarted","Data":"41fbaebc39be36ee52c47d74e2888a476cfdf76962771d24db0cbbe01ff807bf"} Jan 23 14:19:36 crc kubenswrapper[4775]: I0123 14:19:36.536479 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pv6fp" event={"ID":"6831fcdc-628b-4bef-bf9c-5e24b63f9196","Type":"ContainerStarted","Data":"0f2727b0ee883208bba3dd5c587cde2751e71a8c5b72e97c0667a5a0905da7db"} Jan 23 14:19:36 crc kubenswrapper[4775]: I0123 14:19:36.536541 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pv6fp" event={"ID":"6831fcdc-628b-4bef-bf9c-5e24b63f9196","Type":"ContainerStarted","Data":"40956cc256fd2b5b2ded2fe0ed28aceb74acb9a74f12117920e4aadd8f45b915"} Jan 23 14:19:37 crc kubenswrapper[4775]: I0123 14:19:37.548051 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pv6fp" event={"ID":"6831fcdc-628b-4bef-bf9c-5e24b63f9196","Type":"ContainerStarted","Data":"bf0c0d463faa2515d69243d5412f283f99fc1a34983ea09c208e0a76629d7c7e"} Jan 23 14:19:37 crc 
kubenswrapper[4775]: I0123 14:19:37.548500 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:37 crc kubenswrapper[4775]: I0123 14:19:37.586838 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-pv6fp" podStartSLOduration=5.761049517 podStartE2EDuration="13.586772632s" podCreationTimestamp="2026-01-23 14:19:24 +0000 UTC" firstStartedPulling="2026-01-23 14:19:25.046621783 +0000 UTC m=+912.041450533" lastFinishedPulling="2026-01-23 14:19:32.872344898 +0000 UTC m=+919.867173648" observedRunningTime="2026-01-23 14:19:37.582543829 +0000 UTC m=+924.577372629" watchObservedRunningTime="2026-01-23 14:19:37.586772632 +0000 UTC m=+924.581601402" Jan 23 14:19:38 crc kubenswrapper[4775]: I0123 14:19:38.097502 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j"] Jan 23 14:19:38 crc kubenswrapper[4775]: I0123 14:19:38.099761 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" Jan 23 14:19:38 crc kubenswrapper[4775]: I0123 14:19:38.102303 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 14:19:38 crc kubenswrapper[4775]: I0123 14:19:38.144098 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j"] Jan 23 14:19:38 crc kubenswrapper[4775]: I0123 14:19:38.167934 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/44d1d9d6-a01e-49cc-8066-15c9954fda32-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j\" (UID: \"44d1d9d6-a01e-49cc-8066-15c9954fda32\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" Jan 23 14:19:38 crc kubenswrapper[4775]: I0123 14:19:38.168137 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/44d1d9d6-a01e-49cc-8066-15c9954fda32-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j\" (UID: \"44d1d9d6-a01e-49cc-8066-15c9954fda32\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" Jan 23 14:19:38 crc kubenswrapper[4775]: I0123 14:19:38.168252 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqxxc\" (UniqueName: \"kubernetes.io/projected/44d1d9d6-a01e-49cc-8066-15c9954fda32-kube-api-access-fqxxc\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j\" (UID: \"44d1d9d6-a01e-49cc-8066-15c9954fda32\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" Jan 23 14:19:38 crc kubenswrapper[4775]: I0123 14:19:38.269574 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/44d1d9d6-a01e-49cc-8066-15c9954fda32-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j\" (UID: \"44d1d9d6-a01e-49cc-8066-15c9954fda32\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" Jan 23 14:19:38 crc kubenswrapper[4775]: I0123 
14:19:38.269639 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/44d1d9d6-a01e-49cc-8066-15c9954fda32-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j\" (UID: \"44d1d9d6-a01e-49cc-8066-15c9954fda32\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" Jan 23 14:19:38 crc kubenswrapper[4775]: I0123 14:19:38.269669 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqxxc\" (UniqueName: \"kubernetes.io/projected/44d1d9d6-a01e-49cc-8066-15c9954fda32-kube-api-access-fqxxc\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j\" (UID: \"44d1d9d6-a01e-49cc-8066-15c9954fda32\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" Jan 23 14:19:38 crc kubenswrapper[4775]: I0123 14:19:38.270742 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/44d1d9d6-a01e-49cc-8066-15c9954fda32-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j\" (UID: \"44d1d9d6-a01e-49cc-8066-15c9954fda32\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" Jan 23 14:19:38 crc kubenswrapper[4775]: I0123 14:19:38.270833 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/44d1d9d6-a01e-49cc-8066-15c9954fda32-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j\" (UID: \"44d1d9d6-a01e-49cc-8066-15c9954fda32\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" Jan 23 14:19:38 crc kubenswrapper[4775]: I0123 14:19:38.308176 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqxxc\" (UniqueName: \"kubernetes.io/projected/44d1d9d6-a01e-49cc-8066-15c9954fda32-kube-api-access-fqxxc\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j\" (UID: \"44d1d9d6-a01e-49cc-8066-15c9954fda32\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" Jan 23 14:19:38 crc kubenswrapper[4775]: I0123 14:19:38.426769 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" Jan 23 14:19:38 crc kubenswrapper[4775]: I0123 14:19:38.724812 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j"] Jan 23 14:19:39 crc kubenswrapper[4775]: I0123 14:19:39.566917 4775 generic.go:334] "Generic (PLEG): container finished" podID="44d1d9d6-a01e-49cc-8066-15c9954fda32" containerID="a937d438a1a4270f6d3d40caa3a143bd0e86460a1e35413e1f358c0140018f34" exitCode=0 Jan 23 14:19:39 crc kubenswrapper[4775]: I0123 14:19:39.566979 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" event={"ID":"44d1d9d6-a01e-49cc-8066-15c9954fda32","Type":"ContainerDied","Data":"a937d438a1a4270f6d3d40caa3a143bd0e86460a1e35413e1f358c0140018f34"} Jan 23 14:19:39 crc kubenswrapper[4775]: I0123 14:19:39.567056 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" event={"ID":"44d1d9d6-a01e-49cc-8066-15c9954fda32","Type":"ContainerStarted","Data":"4a74468451db63e620eca8183b66f307dbb5ffe1fcc040bb9f3f188b51856c1a"} Jan 23 14:19:39 crc kubenswrapper[4775]: I0123 14:19:39.901298 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:39 crc kubenswrapper[4775]: I0123 14:19:39.955225 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:42 crc kubenswrapper[4775]: I0123 14:19:42.439877 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qngpp"] Jan 23 14:19:42 crc kubenswrapper[4775]: I0123 14:19:42.441228 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:42 crc kubenswrapper[4775]: I0123 14:19:42.450505 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qngpp"] Jan 23 14:19:42 crc kubenswrapper[4775]: I0123 14:19:42.561548 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cszrm\" (UniqueName: \"kubernetes.io/projected/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-kube-api-access-cszrm\") pod \"certified-operators-qngpp\" (UID: \"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63\") " pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:42 crc kubenswrapper[4775]: I0123 14:19:42.561921 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-catalog-content\") pod \"certified-operators-qngpp\" (UID: \"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63\") " pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:42 crc kubenswrapper[4775]: I0123 14:19:42.561951 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-utilities\") pod \"certified-operators-qngpp\" (UID: \"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63\") " pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:42 crc kubenswrapper[4775]: I0123 14:19:42.663035 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cszrm\" (UniqueName: \"kubernetes.io/projected/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-kube-api-access-cszrm\") pod \"certified-operators-qngpp\" (UID: \"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63\") " pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:42 crc kubenswrapper[4775]: I0123 14:19:42.663132 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-catalog-content\") pod \"certified-operators-qngpp\" (UID: \"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63\") " pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:42 crc kubenswrapper[4775]: I0123 14:19:42.663159 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-utilities\") pod \"certified-operators-qngpp\" (UID: \"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63\") " pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:42 crc kubenswrapper[4775]: I0123 14:19:42.663658 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-utilities\") pod \"certified-operators-qngpp\" (UID: \"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63\") " pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:42 crc kubenswrapper[4775]: I0123 14:19:42.664588 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-catalog-content\") pod \"certified-operators-qngpp\" (UID: \"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63\") " pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:42 crc kubenswrapper[4775]: I0123 14:19:42.684746 4775 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cszrm\" (UniqueName: \"kubernetes.io/projected/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-kube-api-access-cszrm\") pod \"certified-operators-qngpp\" (UID: \"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63\") " pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:42 crc kubenswrapper[4775]: I0123 14:19:42.777268 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:43 crc kubenswrapper[4775]: I0123 14:19:43.746167 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qngpp"] Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.440984 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-28swh"] Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.442406 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-28swh" Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.454964 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-28swh"] Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.487963 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl9q6\" (UniqueName: \"kubernetes.io/projected/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-kube-api-access-cl9q6\") pod \"community-operators-28swh\" (UID: \"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6\") " pod="openshift-marketplace/community-operators-28swh" Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.488092 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-utilities\") pod \"community-operators-28swh\" (UID: \"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6\") " pod="openshift-marketplace/community-operators-28swh" Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.488207 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-catalog-content\") pod \"community-operators-28swh\" (UID: \"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6\") " pod="openshift-marketplace/community-operators-28swh" Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.588656 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl9q6\" (UniqueName: \"kubernetes.io/projected/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-kube-api-access-cl9q6\") pod \"community-operators-28swh\" (UID: \"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6\") " pod="openshift-marketplace/community-operators-28swh" Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.588733 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-utilities\") pod \"community-operators-28swh\" (UID: \"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6\") " pod="openshift-marketplace/community-operators-28swh" Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.588785 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-catalog-content\") pod \"community-operators-28swh\" (UID: 
\"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6\") " pod="openshift-marketplace/community-operators-28swh" Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.589377 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-catalog-content\") pod \"community-operators-28swh\" (UID: \"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6\") " pod="openshift-marketplace/community-operators-28swh" Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.589618 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-utilities\") pod \"community-operators-28swh\" (UID: \"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6\") " pod="openshift-marketplace/community-operators-28swh" Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.599910 4775 generic.go:334] "Generic (PLEG): container finished" podID="0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63" containerID="554b1bc2f8959c11cce28ea694e21d41862ea18102ef0961167c2c12bb03ef3f" exitCode=0 Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.599985 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qngpp" event={"ID":"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63","Type":"ContainerDied","Data":"554b1bc2f8959c11cce28ea694e21d41862ea18102ef0961167c2c12bb03ef3f"} Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.600016 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qngpp" event={"ID":"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63","Type":"ContainerStarted","Data":"6d52c73638747afa5394f7ec7461317e60f2cd383c74573847591a6d789baafc"} Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.602522 4775 generic.go:334] "Generic (PLEG): container finished" podID="44d1d9d6-a01e-49cc-8066-15c9954fda32" containerID="b56c53dc9ee2a924fe03668f427ab41d5339a983cc8e557939a0d1ed0c78ddc5" exitCode=0 Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.602552 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" event={"ID":"44d1d9d6-a01e-49cc-8066-15c9954fda32","Type":"ContainerDied","Data":"b56c53dc9ee2a924fe03668f427ab41d5339a983cc8e557939a0d1ed0c78ddc5"} Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.622930 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl9q6\" (UniqueName: \"kubernetes.io/projected/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-kube-api-access-cl9q6\") pod \"community-operators-28swh\" (UID: \"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6\") " pod="openshift-marketplace/community-operators-28swh" Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.812413 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-28swh" Jan 23 14:19:45 crc kubenswrapper[4775]: I0123 14:19:45.303722 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-28swh"] Jan 23 14:19:45 crc kubenswrapper[4775]: W0123 14:19:45.308620 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10fc232f_aecc_4d2b_9dd2_48723f0a0cd6.slice/crio-633fbf5393f98d989746185197ee37b28ecb217d8912287bc04eb2dc32f94dd0 WatchSource:0}: Error finding container 633fbf5393f98d989746185197ee37b28ecb217d8912287bc04eb2dc32f94dd0: Status 404 returned error can't find the container with id 633fbf5393f98d989746185197ee37b28ecb217d8912287bc04eb2dc32f94dd0 Jan 23 14:19:45 crc kubenswrapper[4775]: I0123 14:19:45.497717 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv" Jan 23 14:19:45 crc kubenswrapper[4775]: I0123 14:19:45.608880 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-28swh" event={"ID":"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6","Type":"ContainerDied","Data":"734653dab9d52bff0f3497315e73dde164639ca66bbedaae913a7a71ae66a1e6"} Jan 23 14:19:45 crc kubenswrapper[4775]: I0123 14:19:45.609343 4775 generic.go:334] "Generic (PLEG): container finished" podID="10fc232f-aecc-4d2b-9dd2-48723f0a0cd6" containerID="734653dab9d52bff0f3497315e73dde164639ca66bbedaae913a7a71ae66a1e6" exitCode=0 Jan 23 14:19:45 crc kubenswrapper[4775]: I0123 14:19:45.609591 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-28swh" event={"ID":"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6","Type":"ContainerStarted","Data":"633fbf5393f98d989746185197ee37b28ecb217d8912287bc04eb2dc32f94dd0"} Jan 23 14:19:45 crc kubenswrapper[4775]: I0123 14:19:45.612887 4775 generic.go:334] "Generic (PLEG): container finished" podID="44d1d9d6-a01e-49cc-8066-15c9954fda32" containerID="cdb92bb89e05f5403a9d650767375bffbbaa6c149b86380481f9447fb457144b" exitCode=0 Jan 23 14:19:45 crc kubenswrapper[4775]: I0123 14:19:45.612969 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" event={"ID":"44d1d9d6-a01e-49cc-8066-15c9954fda32","Type":"ContainerDied","Data":"cdb92bb89e05f5403a9d650767375bffbbaa6c149b86380481f9447fb457144b"} Jan 23 14:19:45 crc kubenswrapper[4775]: I0123 14:19:45.633604 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-7qz58" Jan 23 14:19:46 crc kubenswrapper[4775]: I0123 14:19:46.618867 4775 generic.go:334] "Generic (PLEG): container finished" podID="0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63" containerID="8fca8e5c22c6133f1f833c005890db3331499f65f7676fdfff1c29e1f3758837" exitCode=0 Jan 23 14:19:46 crc kubenswrapper[4775]: I0123 14:19:46.618969 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qngpp" event={"ID":"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63","Type":"ContainerDied","Data":"8fca8e5c22c6133f1f833c005890db3331499f65f7676fdfff1c29e1f3758837"} Jan 23 14:19:46 crc kubenswrapper[4775]: I0123 14:19:46.936822 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" Jan 23 14:19:47 crc kubenswrapper[4775]: I0123 14:19:47.119173 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqxxc\" (UniqueName: \"kubernetes.io/projected/44d1d9d6-a01e-49cc-8066-15c9954fda32-kube-api-access-fqxxc\") pod \"44d1d9d6-a01e-49cc-8066-15c9954fda32\" (UID: \"44d1d9d6-a01e-49cc-8066-15c9954fda32\") " Jan 23 14:19:47 crc kubenswrapper[4775]: I0123 14:19:47.119321 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/44d1d9d6-a01e-49cc-8066-15c9954fda32-util\") pod \"44d1d9d6-a01e-49cc-8066-15c9954fda32\" (UID: \"44d1d9d6-a01e-49cc-8066-15c9954fda32\") " Jan 23 14:19:47 crc kubenswrapper[4775]: I0123 14:19:47.119424 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/44d1d9d6-a01e-49cc-8066-15c9954fda32-bundle\") pod \"44d1d9d6-a01e-49cc-8066-15c9954fda32\" (UID: \"44d1d9d6-a01e-49cc-8066-15c9954fda32\") " Jan 23 14:19:47 crc kubenswrapper[4775]: I0123 14:19:47.120898 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44d1d9d6-a01e-49cc-8066-15c9954fda32-bundle" (OuterVolumeSpecName: "bundle") pod "44d1d9d6-a01e-49cc-8066-15c9954fda32" (UID: "44d1d9d6-a01e-49cc-8066-15c9954fda32"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:19:47 crc kubenswrapper[4775]: I0123 14:19:47.132093 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44d1d9d6-a01e-49cc-8066-15c9954fda32-kube-api-access-fqxxc" (OuterVolumeSpecName: "kube-api-access-fqxxc") pod "44d1d9d6-a01e-49cc-8066-15c9954fda32" (UID: "44d1d9d6-a01e-49cc-8066-15c9954fda32"). InnerVolumeSpecName "kube-api-access-fqxxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:19:47 crc kubenswrapper[4775]: I0123 14:19:47.135190 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44d1d9d6-a01e-49cc-8066-15c9954fda32-util" (OuterVolumeSpecName: "util") pod "44d1d9d6-a01e-49cc-8066-15c9954fda32" (UID: "44d1d9d6-a01e-49cc-8066-15c9954fda32"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:19:47 crc kubenswrapper[4775]: I0123 14:19:47.222195 4775 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/44d1d9d6-a01e-49cc-8066-15c9954fda32-util\") on node \"crc\" DevicePath \"\"" Jan 23 14:19:47 crc kubenswrapper[4775]: I0123 14:19:47.222478 4775 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/44d1d9d6-a01e-49cc-8066-15c9954fda32-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:19:47 crc kubenswrapper[4775]: I0123 14:19:47.222491 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqxxc\" (UniqueName: \"kubernetes.io/projected/44d1d9d6-a01e-49cc-8066-15c9954fda32-kube-api-access-fqxxc\") on node \"crc\" DevicePath \"\"" Jan 23 14:19:47 crc kubenswrapper[4775]: I0123 14:19:47.627862 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" event={"ID":"44d1d9d6-a01e-49cc-8066-15c9954fda32","Type":"ContainerDied","Data":"4a74468451db63e620eca8183b66f307dbb5ffe1fcc040bb9f3f188b51856c1a"} Jan 23 14:19:47 crc kubenswrapper[4775]: I0123 14:19:47.627904 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a74468451db63e620eca8183b66f307dbb5ffe1fcc040bb9f3f188b51856c1a" Jan 23 14:19:47 crc kubenswrapper[4775]: I0123 14:19:47.627990 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" Jan 23 14:19:47 crc kubenswrapper[4775]: I0123 14:19:47.631623 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qngpp" event={"ID":"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63","Type":"ContainerStarted","Data":"b55406f89f3ecb3c2b2573ae584fd05e9868c2d3e050b64a303b35fae7a85e4f"} Jan 23 14:19:47 crc kubenswrapper[4775]: I0123 14:19:47.634716 4775 generic.go:334] "Generic (PLEG): container finished" podID="10fc232f-aecc-4d2b-9dd2-48723f0a0cd6" containerID="b5b4183c4ad06b1c793fb4e19eb9cdd431330d6579e5c0ef66d97c8549fc3156" exitCode=0 Jan 23 14:19:47 crc kubenswrapper[4775]: I0123 14:19:47.634761 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-28swh" event={"ID":"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6","Type":"ContainerDied","Data":"b5b4183c4ad06b1c793fb4e19eb9cdd431330d6579e5c0ef66d97c8549fc3156"} Jan 23 14:19:47 crc kubenswrapper[4775]: I0123 14:19:47.656725 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qngpp" podStartSLOduration=2.847736511 podStartE2EDuration="5.656705411s" podCreationTimestamp="2026-01-23 14:19:42 +0000 UTC" firstStartedPulling="2026-01-23 14:19:44.601857906 +0000 UTC m=+931.596686646" lastFinishedPulling="2026-01-23 14:19:47.410826796 +0000 UTC m=+934.405655546" observedRunningTime="2026-01-23 14:19:47.654764895 +0000 UTC m=+934.649593635" watchObservedRunningTime="2026-01-23 14:19:47.656705411 +0000 UTC m=+934.651534161" Jan 23 14:19:48 crc kubenswrapper[4775]: I0123 14:19:48.643060 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-28swh" event={"ID":"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6","Type":"ContainerStarted","Data":"dde2078b2220981090dffac8d417342ba6d34ddd4114ab003180a42263594aaa"} Jan 23 14:19:48 crc kubenswrapper[4775]: I0123 
14:19:48.677070 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-28swh" podStartSLOduration=2.237197011 podStartE2EDuration="4.6770544s" podCreationTimestamp="2026-01-23 14:19:44 +0000 UTC" firstStartedPulling="2026-01-23 14:19:45.609866837 +0000 UTC m=+932.604695577" lastFinishedPulling="2026-01-23 14:19:48.049724186 +0000 UTC m=+935.044552966" observedRunningTime="2026-01-23 14:19:48.67395411 +0000 UTC m=+935.668782860" watchObservedRunningTime="2026-01-23 14:19:48.6770544 +0000 UTC m=+935.671883140" Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.444527 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nmjlg"] Jan 23 14:19:51 crc kubenswrapper[4775]: E0123 14:19:51.445042 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44d1d9d6-a01e-49cc-8066-15c9954fda32" containerName="extract" Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.445056 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="44d1d9d6-a01e-49cc-8066-15c9954fda32" containerName="extract" Jan 23 14:19:51 crc kubenswrapper[4775]: E0123 14:19:51.445077 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44d1d9d6-a01e-49cc-8066-15c9954fda32" containerName="pull" Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.445085 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="44d1d9d6-a01e-49cc-8066-15c9954fda32" containerName="pull" Jan 23 14:19:51 crc kubenswrapper[4775]: E0123 14:19:51.445104 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44d1d9d6-a01e-49cc-8066-15c9954fda32" containerName="util" Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.445113 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="44d1d9d6-a01e-49cc-8066-15c9954fda32" containerName="util" Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.445250 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="44d1d9d6-a01e-49cc-8066-15c9954fda32" containerName="extract" Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.445723 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nmjlg" Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.447658 4775 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-bmqvs" Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.448256 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.454204 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.460781 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nmjlg"] Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.487687 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftl6p\" (UniqueName: \"kubernetes.io/projected/665532a6-49a8-4928-b5e1-909ac58bf7e8-kube-api-access-ftl6p\") pod \"cert-manager-operator-controller-manager-64cf6dff88-nmjlg\" (UID: \"665532a6-49a8-4928-b5e1-909ac58bf7e8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nmjlg" Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.487735 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/665532a6-49a8-4928-b5e1-909ac58bf7e8-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-nmjlg\" (UID: \"665532a6-49a8-4928-b5e1-909ac58bf7e8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nmjlg" Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.589131 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftl6p\" (UniqueName: \"kubernetes.io/projected/665532a6-49a8-4928-b5e1-909ac58bf7e8-kube-api-access-ftl6p\") pod \"cert-manager-operator-controller-manager-64cf6dff88-nmjlg\" (UID: \"665532a6-49a8-4928-b5e1-909ac58bf7e8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nmjlg" Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.589192 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/665532a6-49a8-4928-b5e1-909ac58bf7e8-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-nmjlg\" (UID: \"665532a6-49a8-4928-b5e1-909ac58bf7e8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nmjlg" Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.589707 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/665532a6-49a8-4928-b5e1-909ac58bf7e8-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-nmjlg\" (UID: \"665532a6-49a8-4928-b5e1-909ac58bf7e8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nmjlg" Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.613857 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftl6p\" (UniqueName: \"kubernetes.io/projected/665532a6-49a8-4928-b5e1-909ac58bf7e8-kube-api-access-ftl6p\") pod \"cert-manager-operator-controller-manager-64cf6dff88-nmjlg\" (UID: \"665532a6-49a8-4928-b5e1-909ac58bf7e8\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nmjlg" Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.759842 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nmjlg" Jan 23 14:19:52 crc kubenswrapper[4775]: I0123 14:19:52.013698 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nmjlg"] Jan 23 14:19:52 crc kubenswrapper[4775]: W0123 14:19:52.023741 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod665532a6_49a8_4928_b5e1_909ac58bf7e8.slice/crio-d09f9a109cdba1c9a180993078f63b4fb7ce6bf8a9e80bbe74e4cbcd291fdd59 WatchSource:0}: Error finding container d09f9a109cdba1c9a180993078f63b4fb7ce6bf8a9e80bbe74e4cbcd291fdd59: Status 404 returned error can't find the container with id d09f9a109cdba1c9a180993078f63b4fb7ce6bf8a9e80bbe74e4cbcd291fdd59 Jan 23 14:19:52 crc kubenswrapper[4775]: I0123 14:19:52.670253 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nmjlg" event={"ID":"665532a6-49a8-4928-b5e1-909ac58bf7e8","Type":"ContainerStarted","Data":"d09f9a109cdba1c9a180993078f63b4fb7ce6bf8a9e80bbe74e4cbcd291fdd59"} Jan 23 14:19:52 crc kubenswrapper[4775]: I0123 14:19:52.777877 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:52 crc kubenswrapper[4775]: I0123 14:19:52.778430 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:52 crc kubenswrapper[4775]: I0123 14:19:52.847955 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:53 crc kubenswrapper[4775]: I0123 14:19:53.219185 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:19:53 crc kubenswrapper[4775]: I0123 14:19:53.219259 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:19:53 crc kubenswrapper[4775]: I0123 14:19:53.219323 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:19:53 crc kubenswrapper[4775]: I0123 14:19:53.220230 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fa8fa956c376098d850acaf12f40cfec6f35655328fae4e2ad440d4fb20e4881"} pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 14:19:53 crc kubenswrapper[4775]: I0123 14:19:53.220330 4775 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" containerID="cri-o://fa8fa956c376098d850acaf12f40cfec6f35655328fae4e2ad440d4fb20e4881" gracePeriod=600 Jan 23 14:19:53 crc kubenswrapper[4775]: I0123 14:19:53.680948 4775 generic.go:334] "Generic (PLEG): container finished" podID="4fea0767-0566-4214-855d-ed0373946271" containerID="fa8fa956c376098d850acaf12f40cfec6f35655328fae4e2ad440d4fb20e4881" exitCode=0 Jan 23 14:19:53 crc kubenswrapper[4775]: I0123 14:19:53.681067 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerDied","Data":"fa8fa956c376098d850acaf12f40cfec6f35655328fae4e2ad440d4fb20e4881"} Jan 23 14:19:53 crc kubenswrapper[4775]: I0123 14:19:53.681170 4775 scope.go:117] "RemoveContainer" containerID="815b4a32200fdfae17b328752ad92ad8ee14e4c70962ef6a5caef5715b1e0d13" Jan 23 14:19:53 crc kubenswrapper[4775]: I0123 14:19:53.738166 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:54 crc kubenswrapper[4775]: I0123 14:19:54.695076 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerStarted","Data":"04aeabd8c4a1cb3e5fe85b5d65d741e8a1d8f8a6f9824c7a0b310cfc24829df1"} Jan 23 14:19:54 crc kubenswrapper[4775]: I0123 14:19:54.813222 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-28swh" Jan 23 14:19:54 crc kubenswrapper[4775]: I0123 14:19:54.813495 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-28swh" Jan 23 14:19:54 crc kubenswrapper[4775]: I0123 14:19:54.865586 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-28swh" Jan 23 14:19:54 crc kubenswrapper[4775]: I0123 14:19:54.904139 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:55 crc kubenswrapper[4775]: I0123 14:19:55.781858 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-28swh" Jan 23 14:19:56 crc kubenswrapper[4775]: I0123 14:19:56.826914 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qngpp"] Jan 23 14:19:56 crc kubenswrapper[4775]: I0123 14:19:56.827116 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qngpp" podUID="0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63" containerName="registry-server" containerID="cri-o://b55406f89f3ecb3c2b2573ae584fd05e9868c2d3e050b64a303b35fae7a85e4f" gracePeriod=2 Jan 23 14:19:57 crc kubenswrapper[4775]: I0123 14:19:57.722884 4775 generic.go:334] "Generic (PLEG): container finished" podID="0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63" containerID="b55406f89f3ecb3c2b2573ae584fd05e9868c2d3e050b64a303b35fae7a85e4f" exitCode=0 Jan 23 14:19:57 crc kubenswrapper[4775]: I0123 14:19:57.723998 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qngpp" 
event={"ID":"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63","Type":"ContainerDied","Data":"b55406f89f3ecb3c2b2573ae584fd05e9868c2d3e050b64a303b35fae7a85e4f"} Jan 23 14:19:58 crc kubenswrapper[4775]: I0123 14:19:58.435717 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-28swh"] Jan 23 14:19:58 crc kubenswrapper[4775]: I0123 14:19:58.730441 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-28swh" podUID="10fc232f-aecc-4d2b-9dd2-48723f0a0cd6" containerName="registry-server" containerID="cri-o://dde2078b2220981090dffac8d417342ba6d34ddd4114ab003180a42263594aaa" gracePeriod=2 Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.661496 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.726926 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cszrm\" (UniqueName: \"kubernetes.io/projected/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-kube-api-access-cszrm\") pod \"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63\" (UID: \"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63\") " Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.726995 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-utilities\") pod \"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63\" (UID: \"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63\") " Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.727039 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-catalog-content\") pod \"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63\" (UID: \"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63\") " Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.727924 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-utilities" (OuterVolumeSpecName: "utilities") pod "0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63" (UID: "0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.733161 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-kube-api-access-cszrm" (OuterVolumeSpecName: "kube-api-access-cszrm") pod "0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63" (UID: "0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63"). InnerVolumeSpecName "kube-api-access-cszrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.739990 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.742126 4775 generic.go:334] "Generic (PLEG): container finished" podID="10fc232f-aecc-4d2b-9dd2-48723f0a0cd6" containerID="dde2078b2220981090dffac8d417342ba6d34ddd4114ab003180a42263594aaa" exitCode=0 Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.764945 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qngpp" event={"ID":"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63","Type":"ContainerDied","Data":"6d52c73638747afa5394f7ec7461317e60f2cd383c74573847591a6d789baafc"} Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.765005 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-28swh" event={"ID":"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6","Type":"ContainerDied","Data":"dde2078b2220981090dffac8d417342ba6d34ddd4114ab003180a42263594aaa"} Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.765026 4775 scope.go:117] "RemoveContainer" containerID="b55406f89f3ecb3c2b2573ae584fd05e9868c2d3e050b64a303b35fae7a85e4f" Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.780050 4775 scope.go:117] "RemoveContainer" containerID="8fca8e5c22c6133f1f833c005890db3331499f65f7676fdfff1c29e1f3758837" Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.785831 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63" (UID: "0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.803037 4775 scope.go:117] "RemoveContainer" containerID="554b1bc2f8959c11cce28ea694e21d41862ea18102ef0961167c2c12bb03ef3f" Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.828788 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cszrm\" (UniqueName: \"kubernetes.io/projected/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-kube-api-access-cszrm\") on node \"crc\" DevicePath \"\"" Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.828951 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.828961 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.921991 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-28swh" Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.031251 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl9q6\" (UniqueName: \"kubernetes.io/projected/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-kube-api-access-cl9q6\") pod \"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6\" (UID: \"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6\") " Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.031371 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-utilities\") pod \"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6\" (UID: \"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6\") " Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.031527 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-catalog-content\") pod \"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6\" (UID: \"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6\") " Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.032781 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-utilities" (OuterVolumeSpecName: "utilities") pod "10fc232f-aecc-4d2b-9dd2-48723f0a0cd6" (UID: "10fc232f-aecc-4d2b-9dd2-48723f0a0cd6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.037272 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-kube-api-access-cl9q6" (OuterVolumeSpecName: "kube-api-access-cl9q6") pod "10fc232f-aecc-4d2b-9dd2-48723f0a0cd6" (UID: "10fc232f-aecc-4d2b-9dd2-48723f0a0cd6"). InnerVolumeSpecName "kube-api-access-cl9q6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.075913 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qngpp"] Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.083458 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qngpp"] Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.125776 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10fc232f-aecc-4d2b-9dd2-48723f0a0cd6" (UID: "10fc232f-aecc-4d2b-9dd2-48723f0a0cd6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.133413 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.133453 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl9q6\" (UniqueName: \"kubernetes.io/projected/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-kube-api-access-cl9q6\") on node \"crc\" DevicePath \"\"" Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.133470 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.751567 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nmjlg" event={"ID":"665532a6-49a8-4928-b5e1-909ac58bf7e8","Type":"ContainerStarted","Data":"a13fe513a735533b95506eebc589bb4bfb9fa48cb49c9a4919b7f7ce1307f660"} Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.755406 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-28swh" event={"ID":"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6","Type":"ContainerDied","Data":"633fbf5393f98d989746185197ee37b28ecb217d8912287bc04eb2dc32f94dd0"} Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.755576 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-28swh" Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.755728 4775 scope.go:117] "RemoveContainer" containerID="dde2078b2220981090dffac8d417342ba6d34ddd4114ab003180a42263594aaa" Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.786188 4775 scope.go:117] "RemoveContainer" containerID="b5b4183c4ad06b1c793fb4e19eb9cdd431330d6579e5c0ef66d97c8549fc3156" Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.802864 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nmjlg" podStartSLOduration=2.144473056 podStartE2EDuration="9.802786205s" podCreationTimestamp="2026-01-23 14:19:51 +0000 UTC" firstStartedPulling="2026-01-23 14:19:52.027543675 +0000 UTC m=+939.022372415" lastFinishedPulling="2026-01-23 14:19:59.685856814 +0000 UTC m=+946.680685564" observedRunningTime="2026-01-23 14:20:00.775950836 +0000 UTC m=+947.770779586" watchObservedRunningTime="2026-01-23 14:20:00.802786205 +0000 UTC m=+947.797614985" Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.809855 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-28swh"] Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.817054 4775 scope.go:117] "RemoveContainer" containerID="734653dab9d52bff0f3497315e73dde164639ca66bbedaae913a7a71ae66a1e6" Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.822564 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-28swh"] Jan 23 14:20:01 crc kubenswrapper[4775]: I0123 14:20:01.731589 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63" path="/var/lib/kubelet/pods/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63/volumes" Jan 23 14:20:01 
Jan 23 14:20:01 crc kubenswrapper[4775]: I0123 14:20:01.734084 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10fc232f-aecc-4d2b-9dd2-48723f0a0cd6" path="/var/lib/kubelet/pods/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6/volumes"
Jan 23 14:20:03 crc kubenswrapper[4775]: I0123 14:20:03.899024 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-w6lsn"]
Jan 23 14:20:03 crc kubenswrapper[4775]: E0123 14:20:03.899514 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63" containerName="extract-content"
Jan 23 14:20:03 crc kubenswrapper[4775]: I0123 14:20:03.899529 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63" containerName="extract-content"
Jan 23 14:20:03 crc kubenswrapper[4775]: E0123 14:20:03.899546 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63" containerName="registry-server"
Jan 23 14:20:03 crc kubenswrapper[4775]: I0123 14:20:03.899554 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63" containerName="registry-server"
Jan 23 14:20:03 crc kubenswrapper[4775]: E0123 14:20:03.899564 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10fc232f-aecc-4d2b-9dd2-48723f0a0cd6" containerName="extract-content"
Jan 23 14:20:03 crc kubenswrapper[4775]: I0123 14:20:03.899575 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="10fc232f-aecc-4d2b-9dd2-48723f0a0cd6" containerName="extract-content"
Jan 23 14:20:03 crc kubenswrapper[4775]: E0123 14:20:03.899586 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10fc232f-aecc-4d2b-9dd2-48723f0a0cd6" containerName="extract-utilities"
Jan 23 14:20:03 crc kubenswrapper[4775]: I0123 14:20:03.899594 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="10fc232f-aecc-4d2b-9dd2-48723f0a0cd6" containerName="extract-utilities"
Jan 23 14:20:03 crc kubenswrapper[4775]: E0123 14:20:03.899607 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63" containerName="extract-utilities"
Jan 23 14:20:03 crc kubenswrapper[4775]: I0123 14:20:03.899614 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63" containerName="extract-utilities"
Jan 23 14:20:03 crc kubenswrapper[4775]: E0123 14:20:03.899632 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10fc232f-aecc-4d2b-9dd2-48723f0a0cd6" containerName="registry-server"
Jan 23 14:20:03 crc kubenswrapper[4775]: I0123 14:20:03.899639 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="10fc232f-aecc-4d2b-9dd2-48723f0a0cd6" containerName="registry-server"
Jan 23 14:20:03 crc kubenswrapper[4775]: I0123 14:20:03.899757 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63" containerName="registry-server"
Jan 23 14:20:03 crc kubenswrapper[4775]: I0123 14:20:03.899780 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="10fc232f-aecc-4d2b-9dd2-48723f0a0cd6" containerName="registry-server"
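Note: the paired cpu_manager.go/state_mem.go lines above fire when a new pod is admitted and the CPU and memory managers purge per-container assignments left behind by pods that no longer exist (here the two catalog pods torn down moments earlier); the E-level severity is cosmetic, not a failure. A minimal sketch of that bookkeeping pattern, assuming a simple map keyed by pod UID and container name (the kubelet's real checkpointed state store is more involved):

package main

import "fmt"

// assignments maps podUID -> containerName -> CPU set, standing in for the
// state the kubelet checkpoints in state_mem.go.
type assignments map[string]map[string]string

// removeStale drops entries for pods that are no longer active, printing
// the same shape of message the kubelet logs above.
func (a assignments) removeStale(activePods map[string]bool) {
	for podUID, containers := range a {
		if activePods[podUID] {
			continue
		}
		for name := range containers {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
		}
		delete(a, podUID) // "Deleted CPUSet assignment"
	}
}

func main() {
	a := assignments{"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6": {"registry-server": "2-3"}}
	a.removeStale(map[string]bool{}) // no active pods left: everything is stale
}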
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-w6lsn" Jan 23 14:20:03 crc kubenswrapper[4775]: I0123 14:20:03.902017 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 23 14:20:03 crc kubenswrapper[4775]: I0123 14:20:03.902127 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 23 14:20:03 crc kubenswrapper[4775]: I0123 14:20:03.902452 4775 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-cv6mh" Jan 23 14:20:03 crc kubenswrapper[4775]: I0123 14:20:03.917107 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-w6lsn"] Jan 23 14:20:03 crc kubenswrapper[4775]: I0123 14:20:03.985031 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3613a1b4-54b6-4a47-988a-a6624d530636-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-w6lsn\" (UID: \"3613a1b4-54b6-4a47-988a-a6624d530636\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-w6lsn" Jan 23 14:20:03 crc kubenswrapper[4775]: I0123 14:20:03.985077 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl6mp\" (UniqueName: \"kubernetes.io/projected/3613a1b4-54b6-4a47-988a-a6624d530636-kube-api-access-cl6mp\") pod \"cert-manager-webhook-f4fb5df64-w6lsn\" (UID: \"3613a1b4-54b6-4a47-988a-a6624d530636\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-w6lsn" Jan 23 14:20:04 crc kubenswrapper[4775]: I0123 14:20:04.086531 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3613a1b4-54b6-4a47-988a-a6624d530636-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-w6lsn\" (UID: \"3613a1b4-54b6-4a47-988a-a6624d530636\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-w6lsn" Jan 23 14:20:04 crc kubenswrapper[4775]: I0123 14:20:04.086590 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl6mp\" (UniqueName: \"kubernetes.io/projected/3613a1b4-54b6-4a47-988a-a6624d530636-kube-api-access-cl6mp\") pod \"cert-manager-webhook-f4fb5df64-w6lsn\" (UID: \"3613a1b4-54b6-4a47-988a-a6624d530636\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-w6lsn" Jan 23 14:20:04 crc kubenswrapper[4775]: I0123 14:20:04.108761 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl6mp\" (UniqueName: \"kubernetes.io/projected/3613a1b4-54b6-4a47-988a-a6624d530636-kube-api-access-cl6mp\") pod \"cert-manager-webhook-f4fb5df64-w6lsn\" (UID: \"3613a1b4-54b6-4a47-988a-a6624d530636\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-w6lsn" Jan 23 14:20:04 crc kubenswrapper[4775]: I0123 14:20:04.113605 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3613a1b4-54b6-4a47-988a-a6624d530636-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-w6lsn\" (UID: \"3613a1b4-54b6-4a47-988a-a6624d530636\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-w6lsn" Jan 23 14:20:04 crc kubenswrapper[4775]: I0123 14:20:04.216072 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-w6lsn" Jan 23 14:20:04 crc kubenswrapper[4775]: I0123 14:20:04.445254 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-w6lsn"] Jan 23 14:20:04 crc kubenswrapper[4775]: I0123 14:20:04.778178 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-w6lsn" event={"ID":"3613a1b4-54b6-4a47-988a-a6624d530636","Type":"ContainerStarted","Data":"f1681bff1fd4cb62e3b9069e3a92f32f775c56a2291e72d7edc490a941292dfe"} Jan 23 14:20:06 crc kubenswrapper[4775]: I0123 14:20:06.754437 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-qsmln"] Jan 23 14:20:06 crc kubenswrapper[4775]: I0123 14:20:06.756044 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-qsmln" Jan 23 14:20:06 crc kubenswrapper[4775]: I0123 14:20:06.761890 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-qsmln"] Jan 23 14:20:06 crc kubenswrapper[4775]: I0123 14:20:06.762905 4775 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-b4sr7" Jan 23 14:20:06 crc kubenswrapper[4775]: I0123 14:20:06.827285 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk7cm\" (UniqueName: \"kubernetes.io/projected/620134d3-d230-4c5b-8aaf-4213bcba307c-kube-api-access-lk7cm\") pod \"cert-manager-cainjector-855d9ccff4-qsmln\" (UID: \"620134d3-d230-4c5b-8aaf-4213bcba307c\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-qsmln" Jan 23 14:20:06 crc kubenswrapper[4775]: I0123 14:20:06.827346 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/620134d3-d230-4c5b-8aaf-4213bcba307c-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-qsmln\" (UID: \"620134d3-d230-4c5b-8aaf-4213bcba307c\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-qsmln" Jan 23 14:20:06 crc kubenswrapper[4775]: I0123 14:20:06.928621 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk7cm\" (UniqueName: \"kubernetes.io/projected/620134d3-d230-4c5b-8aaf-4213bcba307c-kube-api-access-lk7cm\") pod \"cert-manager-cainjector-855d9ccff4-qsmln\" (UID: \"620134d3-d230-4c5b-8aaf-4213bcba307c\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-qsmln" Jan 23 14:20:06 crc kubenswrapper[4775]: I0123 14:20:06.928732 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/620134d3-d230-4c5b-8aaf-4213bcba307c-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-qsmln\" (UID: \"620134d3-d230-4c5b-8aaf-4213bcba307c\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-qsmln" Jan 23 14:20:06 crc kubenswrapper[4775]: I0123 14:20:06.965108 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk7cm\" (UniqueName: \"kubernetes.io/projected/620134d3-d230-4c5b-8aaf-4213bcba307c-kube-api-access-lk7cm\") pod \"cert-manager-cainjector-855d9ccff4-qsmln\" (UID: \"620134d3-d230-4c5b-8aaf-4213bcba307c\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-qsmln" Jan 23 14:20:06 crc kubenswrapper[4775]: I0123 14:20:06.966102 4775 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/620134d3-d230-4c5b-8aaf-4213bcba307c-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-qsmln\" (UID: \"620134d3-d230-4c5b-8aaf-4213bcba307c\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-qsmln" Jan 23 14:20:07 crc kubenswrapper[4775]: I0123 14:20:07.081482 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-qsmln" Jan 23 14:20:07 crc kubenswrapper[4775]: I0123 14:20:07.507893 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-qsmln"] Jan 23 14:20:07 crc kubenswrapper[4775]: W0123 14:20:07.520077 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod620134d3_d230_4c5b_8aaf_4213bcba307c.slice/crio-9fa17eb4ee79f3cd2796c4e26ff3ff2b93f76514f9375de1060a6defb638f246 WatchSource:0}: Error finding container 9fa17eb4ee79f3cd2796c4e26ff3ff2b93f76514f9375de1060a6defb638f246: Status 404 returned error can't find the container with id 9fa17eb4ee79f3cd2796c4e26ff3ff2b93f76514f9375de1060a6defb638f246 Jan 23 14:20:07 crc kubenswrapper[4775]: I0123 14:20:07.804901 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-qsmln" event={"ID":"620134d3-d230-4c5b-8aaf-4213bcba307c","Type":"ContainerStarted","Data":"9fa17eb4ee79f3cd2796c4e26ff3ff2b93f76514f9375de1060a6defb638f246"} Jan 23 14:20:11 crc kubenswrapper[4775]: I0123 14:20:11.843776 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xnjc2"] Jan 23 14:20:11 crc kubenswrapper[4775]: I0123 14:20:11.845385 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:11 crc kubenswrapper[4775]: I0123 14:20:11.875496 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xnjc2"] Jan 23 14:20:12 crc kubenswrapper[4775]: I0123 14:20:12.022243 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/692e7969-32d8-473a-8e1c-22122a398b6b-catalog-content\") pod \"redhat-marketplace-xnjc2\" (UID: \"692e7969-32d8-473a-8e1c-22122a398b6b\") " pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:12 crc kubenswrapper[4775]: I0123 14:20:12.022337 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6xrz\" (UniqueName: \"kubernetes.io/projected/692e7969-32d8-473a-8e1c-22122a398b6b-kube-api-access-b6xrz\") pod \"redhat-marketplace-xnjc2\" (UID: \"692e7969-32d8-473a-8e1c-22122a398b6b\") " pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:12 crc kubenswrapper[4775]: I0123 14:20:12.022376 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/692e7969-32d8-473a-8e1c-22122a398b6b-utilities\") pod \"redhat-marketplace-xnjc2\" (UID: \"692e7969-32d8-473a-8e1c-22122a398b6b\") " pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:12 crc kubenswrapper[4775]: I0123 14:20:12.123918 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6xrz\" (UniqueName: \"kubernetes.io/projected/692e7969-32d8-473a-8e1c-22122a398b6b-kube-api-access-b6xrz\") pod \"redhat-marketplace-xnjc2\" (UID: \"692e7969-32d8-473a-8e1c-22122a398b6b\") " pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:12 crc kubenswrapper[4775]: I0123 14:20:12.123985 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/692e7969-32d8-473a-8e1c-22122a398b6b-utilities\") pod \"redhat-marketplace-xnjc2\" (UID: \"692e7969-32d8-473a-8e1c-22122a398b6b\") " pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:12 crc kubenswrapper[4775]: I0123 14:20:12.124055 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/692e7969-32d8-473a-8e1c-22122a398b6b-catalog-content\") pod \"redhat-marketplace-xnjc2\" (UID: \"692e7969-32d8-473a-8e1c-22122a398b6b\") " pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:12 crc kubenswrapper[4775]: I0123 14:20:12.124661 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/692e7969-32d8-473a-8e1c-22122a398b6b-catalog-content\") pod \"redhat-marketplace-xnjc2\" (UID: \"692e7969-32d8-473a-8e1c-22122a398b6b\") " pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:12 crc kubenswrapper[4775]: I0123 14:20:12.124924 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/692e7969-32d8-473a-8e1c-22122a398b6b-utilities\") pod \"redhat-marketplace-xnjc2\" (UID: \"692e7969-32d8-473a-8e1c-22122a398b6b\") " pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:12 crc kubenswrapper[4775]: I0123 14:20:12.146615 4775 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-b6xrz\" (UniqueName: \"kubernetes.io/projected/692e7969-32d8-473a-8e1c-22122a398b6b-kube-api-access-b6xrz\") pod \"redhat-marketplace-xnjc2\" (UID: \"692e7969-32d8-473a-8e1c-22122a398b6b\") " pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:12 crc kubenswrapper[4775]: I0123 14:20:12.181200 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:14 crc kubenswrapper[4775]: W0123 14:20:14.538966 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod692e7969_32d8_473a_8e1c_22122a398b6b.slice/crio-a62a35b6fa8b7f21190e89aec2597a4e21721e5064ed9a9af9186117b5aa5b02 WatchSource:0}: Error finding container a62a35b6fa8b7f21190e89aec2597a4e21721e5064ed9a9af9186117b5aa5b02: Status 404 returned error can't find the container with id a62a35b6fa8b7f21190e89aec2597a4e21721e5064ed9a9af9186117b5aa5b02 Jan 23 14:20:14 crc kubenswrapper[4775]: I0123 14:20:14.539876 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xnjc2"] Jan 23 14:20:15 crc kubenswrapper[4775]: I0123 14:20:15.542567 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-qsmln" event={"ID":"620134d3-d230-4c5b-8aaf-4213bcba307c","Type":"ContainerStarted","Data":"541e19b03608dbc973d3ea2fbd9e8d4dbbc13932ecd7b2503495e90b9d542c52"} Jan 23 14:20:15 crc kubenswrapper[4775]: I0123 14:20:15.544751 4775 generic.go:334] "Generic (PLEG): container finished" podID="692e7969-32d8-473a-8e1c-22122a398b6b" containerID="3c9bdb3ee97168179f059039bbbf9a4917bea48cc6d646023a917d19e5dbc247" exitCode=0 Jan 23 14:20:15 crc kubenswrapper[4775]: I0123 14:20:15.544812 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnjc2" event={"ID":"692e7969-32d8-473a-8e1c-22122a398b6b","Type":"ContainerDied","Data":"3c9bdb3ee97168179f059039bbbf9a4917bea48cc6d646023a917d19e5dbc247"} Jan 23 14:20:15 crc kubenswrapper[4775]: I0123 14:20:15.544862 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnjc2" event={"ID":"692e7969-32d8-473a-8e1c-22122a398b6b","Type":"ContainerStarted","Data":"a62a35b6fa8b7f21190e89aec2597a4e21721e5064ed9a9af9186117b5aa5b02"} Jan 23 14:20:15 crc kubenswrapper[4775]: I0123 14:20:15.546880 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-w6lsn" event={"ID":"3613a1b4-54b6-4a47-988a-a6624d530636","Type":"ContainerStarted","Data":"0d3d04e71da29033a048f89ed0f44fbfe0349d6eb20f86e4b597408dc16f9b20"} Jan 23 14:20:15 crc kubenswrapper[4775]: I0123 14:20:15.547026 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-w6lsn" Jan 23 14:20:15 crc kubenswrapper[4775]: I0123 14:20:15.562125 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-qsmln" podStartSLOduration=3.639717895 podStartE2EDuration="9.56210854s" podCreationTimestamp="2026-01-23 14:20:06 +0000 UTC" firstStartedPulling="2026-01-23 14:20:07.522184229 +0000 UTC m=+954.517012969" lastFinishedPulling="2026-01-23 14:20:13.444574874 +0000 UTC m=+960.439403614" observedRunningTime="2026-01-23 14:20:15.560310358 +0000 UTC m=+962.555139098" watchObservedRunningTime="2026-01-23 14:20:15.56210854 +0000 UTC 
m=+962.556937280" Jan 23 14:20:15 crc kubenswrapper[4775]: I0123 14:20:15.605385 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-w6lsn" podStartSLOduration=3.569814622 podStartE2EDuration="12.605367175s" podCreationTimestamp="2026-01-23 14:20:03 +0000 UTC" firstStartedPulling="2026-01-23 14:20:04.454445049 +0000 UTC m=+951.449273799" lastFinishedPulling="2026-01-23 14:20:13.489997612 +0000 UTC m=+960.484826352" observedRunningTime="2026-01-23 14:20:15.604986414 +0000 UTC m=+962.599815154" watchObservedRunningTime="2026-01-23 14:20:15.605367175 +0000 UTC m=+962.600195915" Jan 23 14:20:16 crc kubenswrapper[4775]: I0123 14:20:16.554776 4775 generic.go:334] "Generic (PLEG): container finished" podID="692e7969-32d8-473a-8e1c-22122a398b6b" containerID="38befce2410f86731c0e2b3323874ab187ce997d16ea3c52af26e1be11e45f7d" exitCode=0 Jan 23 14:20:16 crc kubenswrapper[4775]: I0123 14:20:16.554898 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnjc2" event={"ID":"692e7969-32d8-473a-8e1c-22122a398b6b","Type":"ContainerDied","Data":"38befce2410f86731c0e2b3323874ab187ce997d16ea3c52af26e1be11e45f7d"} Jan 23 14:20:17 crc kubenswrapper[4775]: I0123 14:20:17.562458 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnjc2" event={"ID":"692e7969-32d8-473a-8e1c-22122a398b6b","Type":"ContainerStarted","Data":"143dc3d685be3156e9d2a3aa85cb50d0c15e2ed2c8726d48309f751495f122ac"} Jan 23 14:20:17 crc kubenswrapper[4775]: I0123 14:20:17.597345 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xnjc2" podStartSLOduration=5.085140578 podStartE2EDuration="6.597325429s" podCreationTimestamp="2026-01-23 14:20:11 +0000 UTC" firstStartedPulling="2026-01-23 14:20:15.546653522 +0000 UTC m=+962.541482262" lastFinishedPulling="2026-01-23 14:20:17.058838363 +0000 UTC m=+964.053667113" observedRunningTime="2026-01-23 14:20:17.595420883 +0000 UTC m=+964.590249623" watchObservedRunningTime="2026-01-23 14:20:17.597325429 +0000 UTC m=+964.592154169" Jan 23 14:20:19 crc kubenswrapper[4775]: I0123 14:20:19.219667 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-w6lsn" Jan 23 14:20:21 crc kubenswrapper[4775]: I0123 14:20:21.830547 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-dzfhf"] Jan 23 14:20:21 crc kubenswrapper[4775]: I0123 14:20:21.832639 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-dzfhf" Jan 23 14:20:21 crc kubenswrapper[4775]: I0123 14:20:21.837329 4775 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-2tmb2" Jan 23 14:20:21 crc kubenswrapper[4775]: I0123 14:20:21.843453 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-dzfhf"] Jan 23 14:20:22 crc kubenswrapper[4775]: I0123 14:20:22.004119 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z4r9\" (UniqueName: \"kubernetes.io/projected/2a26d984-5abe-44ce-ad1e-25842b8f7e51-kube-api-access-7z4r9\") pod \"cert-manager-86cb77c54b-dzfhf\" (UID: \"2a26d984-5abe-44ce-ad1e-25842b8f7e51\") " pod="cert-manager/cert-manager-86cb77c54b-dzfhf" Jan 23 14:20:22 crc kubenswrapper[4775]: I0123 14:20:22.004169 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2a26d984-5abe-44ce-ad1e-25842b8f7e51-bound-sa-token\") pod \"cert-manager-86cb77c54b-dzfhf\" (UID: \"2a26d984-5abe-44ce-ad1e-25842b8f7e51\") " pod="cert-manager/cert-manager-86cb77c54b-dzfhf" Jan 23 14:20:22 crc kubenswrapper[4775]: I0123 14:20:22.105703 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z4r9\" (UniqueName: \"kubernetes.io/projected/2a26d984-5abe-44ce-ad1e-25842b8f7e51-kube-api-access-7z4r9\") pod \"cert-manager-86cb77c54b-dzfhf\" (UID: \"2a26d984-5abe-44ce-ad1e-25842b8f7e51\") " pod="cert-manager/cert-manager-86cb77c54b-dzfhf" Jan 23 14:20:22 crc kubenswrapper[4775]: I0123 14:20:22.105790 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2a26d984-5abe-44ce-ad1e-25842b8f7e51-bound-sa-token\") pod \"cert-manager-86cb77c54b-dzfhf\" (UID: \"2a26d984-5abe-44ce-ad1e-25842b8f7e51\") " pod="cert-manager/cert-manager-86cb77c54b-dzfhf" Jan 23 14:20:22 crc kubenswrapper[4775]: I0123 14:20:22.130169 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2a26d984-5abe-44ce-ad1e-25842b8f7e51-bound-sa-token\") pod \"cert-manager-86cb77c54b-dzfhf\" (UID: \"2a26d984-5abe-44ce-ad1e-25842b8f7e51\") " pod="cert-manager/cert-manager-86cb77c54b-dzfhf" Jan 23 14:20:22 crc kubenswrapper[4775]: I0123 14:20:22.130397 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z4r9\" (UniqueName: \"kubernetes.io/projected/2a26d984-5abe-44ce-ad1e-25842b8f7e51-kube-api-access-7z4r9\") pod \"cert-manager-86cb77c54b-dzfhf\" (UID: \"2a26d984-5abe-44ce-ad1e-25842b8f7e51\") " pod="cert-manager/cert-manager-86cb77c54b-dzfhf" Jan 23 14:20:22 crc kubenswrapper[4775]: I0123 14:20:22.164934 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-dzfhf" Jan 23 14:20:22 crc kubenswrapper[4775]: I0123 14:20:22.182359 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:22 crc kubenswrapper[4775]: I0123 14:20:22.182394 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:22 crc kubenswrapper[4775]: I0123 14:20:22.243169 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:22 crc kubenswrapper[4775]: I0123 14:20:22.575299 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-dzfhf"] Jan 23 14:20:22 crc kubenswrapper[4775]: I0123 14:20:22.608428 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-dzfhf" event={"ID":"2a26d984-5abe-44ce-ad1e-25842b8f7e51","Type":"ContainerStarted","Data":"76608cd634deb86296a1f8b61789982cb5efa5a729f506ad2806b80067d1bca1"} Jan 23 14:20:22 crc kubenswrapper[4775]: I0123 14:20:22.670320 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:22 crc kubenswrapper[4775]: I0123 14:20:22.724716 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xnjc2"] Jan 23 14:20:23 crc kubenswrapper[4775]: I0123 14:20:23.619420 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-dzfhf" event={"ID":"2a26d984-5abe-44ce-ad1e-25842b8f7e51","Type":"ContainerStarted","Data":"768b510429bf3bf9266ad9b99d279d4dee3e9b1c53590925f4b62ff38ebf5de8"} Jan 23 14:20:23 crc kubenswrapper[4775]: I0123 14:20:23.653786 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-dzfhf" podStartSLOduration=2.653755204 podStartE2EDuration="2.653755204s" podCreationTimestamp="2026-01-23 14:20:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:20:23.640846279 +0000 UTC m=+970.635675019" watchObservedRunningTime="2026-01-23 14:20:23.653755204 +0000 UTC m=+970.648583974" Jan 23 14:20:24 crc kubenswrapper[4775]: I0123 14:20:24.626409 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xnjc2" podUID="692e7969-32d8-473a-8e1c-22122a398b6b" containerName="registry-server" containerID="cri-o://143dc3d685be3156e9d2a3aa85cb50d0c15e2ed2c8726d48309f751495f122ac" gracePeriod=2 Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.069901 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.250623 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6xrz\" (UniqueName: \"kubernetes.io/projected/692e7969-32d8-473a-8e1c-22122a398b6b-kube-api-access-b6xrz\") pod \"692e7969-32d8-473a-8e1c-22122a398b6b\" (UID: \"692e7969-32d8-473a-8e1c-22122a398b6b\") " Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.250723 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/692e7969-32d8-473a-8e1c-22122a398b6b-catalog-content\") pod \"692e7969-32d8-473a-8e1c-22122a398b6b\" (UID: \"692e7969-32d8-473a-8e1c-22122a398b6b\") " Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.250791 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/692e7969-32d8-473a-8e1c-22122a398b6b-utilities\") pod \"692e7969-32d8-473a-8e1c-22122a398b6b\" (UID: \"692e7969-32d8-473a-8e1c-22122a398b6b\") " Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.252534 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/692e7969-32d8-473a-8e1c-22122a398b6b-utilities" (OuterVolumeSpecName: "utilities") pod "692e7969-32d8-473a-8e1c-22122a398b6b" (UID: "692e7969-32d8-473a-8e1c-22122a398b6b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.262079 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/692e7969-32d8-473a-8e1c-22122a398b6b-kube-api-access-b6xrz" (OuterVolumeSpecName: "kube-api-access-b6xrz") pod "692e7969-32d8-473a-8e1c-22122a398b6b" (UID: "692e7969-32d8-473a-8e1c-22122a398b6b"). InnerVolumeSpecName "kube-api-access-b6xrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.294226 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/692e7969-32d8-473a-8e1c-22122a398b6b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "692e7969-32d8-473a-8e1c-22122a398b6b" (UID: "692e7969-32d8-473a-8e1c-22122a398b6b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.352736 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6xrz\" (UniqueName: \"kubernetes.io/projected/692e7969-32d8-473a-8e1c-22122a398b6b-kube-api-access-b6xrz\") on node \"crc\" DevicePath \"\"" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.352789 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/692e7969-32d8-473a-8e1c-22122a398b6b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.352858 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/692e7969-32d8-473a-8e1c-22122a398b6b-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.633285 4775 generic.go:334] "Generic (PLEG): container finished" podID="692e7969-32d8-473a-8e1c-22122a398b6b" containerID="143dc3d685be3156e9d2a3aa85cb50d0c15e2ed2c8726d48309f751495f122ac" exitCode=0 Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.633323 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnjc2" event={"ID":"692e7969-32d8-473a-8e1c-22122a398b6b","Type":"ContainerDied","Data":"143dc3d685be3156e9d2a3aa85cb50d0c15e2ed2c8726d48309f751495f122ac"} Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.633352 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnjc2" event={"ID":"692e7969-32d8-473a-8e1c-22122a398b6b","Type":"ContainerDied","Data":"a62a35b6fa8b7f21190e89aec2597a4e21721e5064ed9a9af9186117b5aa5b02"} Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.633351 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.633429 4775 scope.go:117] "RemoveContainer" containerID="143dc3d685be3156e9d2a3aa85cb50d0c15e2ed2c8726d48309f751495f122ac" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.649155 4775 scope.go:117] "RemoveContainer" containerID="38befce2410f86731c0e2b3323874ab187ce997d16ea3c52af26e1be11e45f7d" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.659340 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xnjc2"] Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.669728 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xnjc2"] Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.677511 4775 scope.go:117] "RemoveContainer" containerID="3c9bdb3ee97168179f059039bbbf9a4917bea48cc6d646023a917d19e5dbc247" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.690622 4775 scope.go:117] "RemoveContainer" containerID="143dc3d685be3156e9d2a3aa85cb50d0c15e2ed2c8726d48309f751495f122ac" Jan 23 14:20:25 crc kubenswrapper[4775]: E0123 14:20:25.691159 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"143dc3d685be3156e9d2a3aa85cb50d0c15e2ed2c8726d48309f751495f122ac\": container with ID starting with 143dc3d685be3156e9d2a3aa85cb50d0c15e2ed2c8726d48309f751495f122ac not found: ID does not exist" containerID="143dc3d685be3156e9d2a3aa85cb50d0c15e2ed2c8726d48309f751495f122ac" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.691188 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"143dc3d685be3156e9d2a3aa85cb50d0c15e2ed2c8726d48309f751495f122ac"} err="failed to get container status \"143dc3d685be3156e9d2a3aa85cb50d0c15e2ed2c8726d48309f751495f122ac\": rpc error: code = NotFound desc = could not find container \"143dc3d685be3156e9d2a3aa85cb50d0c15e2ed2c8726d48309f751495f122ac\": container with ID starting with 143dc3d685be3156e9d2a3aa85cb50d0c15e2ed2c8726d48309f751495f122ac not found: ID does not exist" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.691212 4775 scope.go:117] "RemoveContainer" containerID="38befce2410f86731c0e2b3323874ab187ce997d16ea3c52af26e1be11e45f7d" Jan 23 14:20:25 crc kubenswrapper[4775]: E0123 14:20:25.691499 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38befce2410f86731c0e2b3323874ab187ce997d16ea3c52af26e1be11e45f7d\": container with ID starting with 38befce2410f86731c0e2b3323874ab187ce997d16ea3c52af26e1be11e45f7d not found: ID does not exist" containerID="38befce2410f86731c0e2b3323874ab187ce997d16ea3c52af26e1be11e45f7d" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.691552 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38befce2410f86731c0e2b3323874ab187ce997d16ea3c52af26e1be11e45f7d"} err="failed to get container status \"38befce2410f86731c0e2b3323874ab187ce997d16ea3c52af26e1be11e45f7d\": rpc error: code = NotFound desc = could not find container \"38befce2410f86731c0e2b3323874ab187ce997d16ea3c52af26e1be11e45f7d\": container with ID starting with 38befce2410f86731c0e2b3323874ab187ce997d16ea3c52af26e1be11e45f7d not found: ID does not exist" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.691587 4775 scope.go:117] "RemoveContainer" 
containerID="3c9bdb3ee97168179f059039bbbf9a4917bea48cc6d646023a917d19e5dbc247" Jan 23 14:20:25 crc kubenswrapper[4775]: E0123 14:20:25.691930 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c9bdb3ee97168179f059039bbbf9a4917bea48cc6d646023a917d19e5dbc247\": container with ID starting with 3c9bdb3ee97168179f059039bbbf9a4917bea48cc6d646023a917d19e5dbc247 not found: ID does not exist" containerID="3c9bdb3ee97168179f059039bbbf9a4917bea48cc6d646023a917d19e5dbc247" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.691960 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c9bdb3ee97168179f059039bbbf9a4917bea48cc6d646023a917d19e5dbc247"} err="failed to get container status \"3c9bdb3ee97168179f059039bbbf9a4917bea48cc6d646023a917d19e5dbc247\": rpc error: code = NotFound desc = could not find container \"3c9bdb3ee97168179f059039bbbf9a4917bea48cc6d646023a917d19e5dbc247\": container with ID starting with 3c9bdb3ee97168179f059039bbbf9a4917bea48cc6d646023a917d19e5dbc247 not found: ID does not exist" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.722262 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="692e7969-32d8-473a-8e1c-22122a398b6b" path="/var/lib/kubelet/pods/692e7969-32d8-473a-8e1c-22122a398b6b/volumes" Jan 23 14:20:32 crc kubenswrapper[4775]: I0123 14:20:32.780069 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-zsmtn"] Jan 23 14:20:32 crc kubenswrapper[4775]: E0123 14:20:32.780700 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="692e7969-32d8-473a-8e1c-22122a398b6b" containerName="extract-utilities" Jan 23 14:20:32 crc kubenswrapper[4775]: I0123 14:20:32.780720 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="692e7969-32d8-473a-8e1c-22122a398b6b" containerName="extract-utilities" Jan 23 14:20:32 crc kubenswrapper[4775]: E0123 14:20:32.780748 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="692e7969-32d8-473a-8e1c-22122a398b6b" containerName="extract-content" Jan 23 14:20:32 crc kubenswrapper[4775]: I0123 14:20:32.780761 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="692e7969-32d8-473a-8e1c-22122a398b6b" containerName="extract-content" Jan 23 14:20:32 crc kubenswrapper[4775]: E0123 14:20:32.780776 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="692e7969-32d8-473a-8e1c-22122a398b6b" containerName="registry-server" Jan 23 14:20:32 crc kubenswrapper[4775]: I0123 14:20:32.780788 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="692e7969-32d8-473a-8e1c-22122a398b6b" containerName="registry-server" Jan 23 14:20:32 crc kubenswrapper[4775]: I0123 14:20:32.781000 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="692e7969-32d8-473a-8e1c-22122a398b6b" containerName="registry-server" Jan 23 14:20:32 crc kubenswrapper[4775]: I0123 14:20:32.781724 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-zsmtn" Jan 23 14:20:32 crc kubenswrapper[4775]: I0123 14:20:32.791434 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-nht2h" Jan 23 14:20:32 crc kubenswrapper[4775]: I0123 14:20:32.791599 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 23 14:20:32 crc kubenswrapper[4775]: I0123 14:20:32.792039 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 23 14:20:32 crc kubenswrapper[4775]: I0123 14:20:32.803375 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zsmtn"] Jan 23 14:20:32 crc kubenswrapper[4775]: I0123 14:20:32.864762 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stljm\" (UniqueName: \"kubernetes.io/projected/24d6c5ef-17e4-48d8-ab0c-d5909563e217-kube-api-access-stljm\") pod \"openstack-operator-index-zsmtn\" (UID: \"24d6c5ef-17e4-48d8-ab0c-d5909563e217\") " pod="openstack-operators/openstack-operator-index-zsmtn" Jan 23 14:20:32 crc kubenswrapper[4775]: I0123 14:20:32.966634 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stljm\" (UniqueName: \"kubernetes.io/projected/24d6c5ef-17e4-48d8-ab0c-d5909563e217-kube-api-access-stljm\") pod \"openstack-operator-index-zsmtn\" (UID: \"24d6c5ef-17e4-48d8-ab0c-d5909563e217\") " pod="openstack-operators/openstack-operator-index-zsmtn" Jan 23 14:20:32 crc kubenswrapper[4775]: I0123 14:20:32.984403 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stljm\" (UniqueName: \"kubernetes.io/projected/24d6c5ef-17e4-48d8-ab0c-d5909563e217-kube-api-access-stljm\") pod \"openstack-operator-index-zsmtn\" (UID: \"24d6c5ef-17e4-48d8-ab0c-d5909563e217\") " pod="openstack-operators/openstack-operator-index-zsmtn" Jan 23 14:20:33 crc kubenswrapper[4775]: I0123 14:20:33.128425 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-zsmtn" Jan 23 14:20:33 crc kubenswrapper[4775]: I0123 14:20:33.399589 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zsmtn"] Jan 23 14:20:33 crc kubenswrapper[4775]: W0123 14:20:33.412997 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24d6c5ef_17e4_48d8_ab0c_d5909563e217.slice/crio-9f3c0c7b10a79ed4b95f841087f9f80ff3d8670967feeee1c3dfeb8cb6945e8b WatchSource:0}: Error finding container 9f3c0c7b10a79ed4b95f841087f9f80ff3d8670967feeee1c3dfeb8cb6945e8b: Status 404 returned error can't find the container with id 9f3c0c7b10a79ed4b95f841087f9f80ff3d8670967feeee1c3dfeb8cb6945e8b Jan 23 14:20:33 crc kubenswrapper[4775]: I0123 14:20:33.689104 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zsmtn" event={"ID":"24d6c5ef-17e4-48d8-ab0c-d5909563e217","Type":"ContainerStarted","Data":"9f3c0c7b10a79ed4b95f841087f9f80ff3d8670967feeee1c3dfeb8cb6945e8b"} Jan 23 14:20:37 crc kubenswrapper[4775]: I0123 14:20:37.350994 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-zsmtn"] Jan 23 14:20:37 crc kubenswrapper[4775]: I0123 14:20:37.714934 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-zsmtn" podUID="24d6c5ef-17e4-48d8-ab0c-d5909563e217" containerName="registry-server" containerID="cri-o://56371e9dfb1845e9cc35ccf702bc0499d3b0c9dd260869288c8076d73c3c566d" gracePeriod=2 Jan 23 14:20:37 crc kubenswrapper[4775]: I0123 14:20:37.720464 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zsmtn" event={"ID":"24d6c5ef-17e4-48d8-ab0c-d5909563e217","Type":"ContainerStarted","Data":"56371e9dfb1845e9cc35ccf702bc0499d3b0c9dd260869288c8076d73c3c566d"} Jan 23 14:20:37 crc kubenswrapper[4775]: I0123 14:20:37.964296 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-zsmtn" podStartSLOduration=2.025417068 podStartE2EDuration="5.964268246s" podCreationTimestamp="2026-01-23 14:20:32 +0000 UTC" firstStartedPulling="2026-01-23 14:20:33.415199691 +0000 UTC m=+980.410028431" lastFinishedPulling="2026-01-23 14:20:37.354050869 +0000 UTC m=+984.348879609" observedRunningTime="2026-01-23 14:20:37.738971378 +0000 UTC m=+984.733800118" watchObservedRunningTime="2026-01-23 14:20:37.964268246 +0000 UTC m=+984.959097026" Jan 23 14:20:37 crc kubenswrapper[4775]: I0123 14:20:37.972371 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-5czdz"] Jan 23 14:20:37 crc kubenswrapper[4775]: I0123 14:20:37.974025 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-5czdz" Jan 23 14:20:37 crc kubenswrapper[4775]: I0123 14:20:37.984721 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5czdz"] Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.130372 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-547hs\" (UniqueName: \"kubernetes.io/projected/a0ddc210-ca29-42e4-a4c2-a07881434fed-kube-api-access-547hs\") pod \"openstack-operator-index-5czdz\" (UID: \"a0ddc210-ca29-42e4-a4c2-a07881434fed\") " pod="openstack-operators/openstack-operator-index-5czdz" Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.167693 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zsmtn" Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.231991 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-547hs\" (UniqueName: \"kubernetes.io/projected/a0ddc210-ca29-42e4-a4c2-a07881434fed-kube-api-access-547hs\") pod \"openstack-operator-index-5czdz\" (UID: \"a0ddc210-ca29-42e4-a4c2-a07881434fed\") " pod="openstack-operators/openstack-operator-index-5czdz" Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.251959 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-547hs\" (UniqueName: \"kubernetes.io/projected/a0ddc210-ca29-42e4-a4c2-a07881434fed-kube-api-access-547hs\") pod \"openstack-operator-index-5czdz\" (UID: \"a0ddc210-ca29-42e4-a4c2-a07881434fed\") " pod="openstack-operators/openstack-operator-index-5czdz" Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.306591 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-5czdz" Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.333176 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stljm\" (UniqueName: \"kubernetes.io/projected/24d6c5ef-17e4-48d8-ab0c-d5909563e217-kube-api-access-stljm\") pod \"24d6c5ef-17e4-48d8-ab0c-d5909563e217\" (UID: \"24d6c5ef-17e4-48d8-ab0c-d5909563e217\") " Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.338328 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24d6c5ef-17e4-48d8-ab0c-d5909563e217-kube-api-access-stljm" (OuterVolumeSpecName: "kube-api-access-stljm") pod "24d6c5ef-17e4-48d8-ab0c-d5909563e217" (UID: "24d6c5ef-17e4-48d8-ab0c-d5909563e217"). InnerVolumeSpecName "kube-api-access-stljm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.434686 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stljm\" (UniqueName: \"kubernetes.io/projected/24d6c5ef-17e4-48d8-ab0c-d5909563e217-kube-api-access-stljm\") on node \"crc\" DevicePath \"\"" Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.547557 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5czdz"] Jan 23 14:20:38 crc kubenswrapper[4775]: W0123 14:20:38.550966 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0ddc210_ca29_42e4_a4c2_a07881434fed.slice/crio-9b2975a6c6021d6aa16aa9925020342b62450e4557243c4bb1c36a1db95a2ba2 WatchSource:0}: Error finding container 9b2975a6c6021d6aa16aa9925020342b62450e4557243c4bb1c36a1db95a2ba2: Status 404 returned error can't find the container with id 9b2975a6c6021d6aa16aa9925020342b62450e4557243c4bb1c36a1db95a2ba2 Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.726597 4775 generic.go:334] "Generic (PLEG): container finished" podID="24d6c5ef-17e4-48d8-ab0c-d5909563e217" containerID="56371e9dfb1845e9cc35ccf702bc0499d3b0c9dd260869288c8076d73c3c566d" exitCode=0 Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.726705 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zsmtn" event={"ID":"24d6c5ef-17e4-48d8-ab0c-d5909563e217","Type":"ContainerDied","Data":"56371e9dfb1845e9cc35ccf702bc0499d3b0c9dd260869288c8076d73c3c566d"} Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.726784 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zsmtn" event={"ID":"24d6c5ef-17e4-48d8-ab0c-d5909563e217","Type":"ContainerDied","Data":"9f3c0c7b10a79ed4b95f841087f9f80ff3d8670967feeee1c3dfeb8cb6945e8b"} Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.726727 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-zsmtn" Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.726860 4775 scope.go:117] "RemoveContainer" containerID="56371e9dfb1845e9cc35ccf702bc0499d3b0c9dd260869288c8076d73c3c566d" Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.731900 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5czdz" event={"ID":"a0ddc210-ca29-42e4-a4c2-a07881434fed","Type":"ContainerStarted","Data":"9b2975a6c6021d6aa16aa9925020342b62450e4557243c4bb1c36a1db95a2ba2"} Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.759847 4775 scope.go:117] "RemoveContainer" containerID="56371e9dfb1845e9cc35ccf702bc0499d3b0c9dd260869288c8076d73c3c566d" Jan 23 14:20:38 crc kubenswrapper[4775]: E0123 14:20:38.760469 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56371e9dfb1845e9cc35ccf702bc0499d3b0c9dd260869288c8076d73c3c566d\": container with ID starting with 56371e9dfb1845e9cc35ccf702bc0499d3b0c9dd260869288c8076d73c3c566d not found: ID does not exist" containerID="56371e9dfb1845e9cc35ccf702bc0499d3b0c9dd260869288c8076d73c3c566d" Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.760505 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56371e9dfb1845e9cc35ccf702bc0499d3b0c9dd260869288c8076d73c3c566d"} err="failed to get container status \"56371e9dfb1845e9cc35ccf702bc0499d3b0c9dd260869288c8076d73c3c566d\": rpc error: code = NotFound desc = could not find container \"56371e9dfb1845e9cc35ccf702bc0499d3b0c9dd260869288c8076d73c3c566d\": container with ID starting with 56371e9dfb1845e9cc35ccf702bc0499d3b0c9dd260869288c8076d73c3c566d not found: ID does not exist" Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.787462 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-zsmtn"] Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.797745 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-zsmtn"] Jan 23 14:20:39 crc kubenswrapper[4775]: I0123 14:20:39.726619 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24d6c5ef-17e4-48d8-ab0c-d5909563e217" path="/var/lib/kubelet/pods/24d6c5ef-17e4-48d8-ab0c-d5909563e217/volumes" Jan 23 14:20:39 crc kubenswrapper[4775]: I0123 14:20:39.748149 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5czdz" event={"ID":"a0ddc210-ca29-42e4-a4c2-a07881434fed","Type":"ContainerStarted","Data":"84b1d07b4bf4f3802ebc525ff6cc420874569362f03db8681190e265d36844f9"} Jan 23 14:20:39 crc kubenswrapper[4775]: I0123 14:20:39.776161 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-5czdz" podStartSLOduration=2.716825343 podStartE2EDuration="2.776140464s" podCreationTimestamp="2026-01-23 14:20:37 +0000 UTC" firstStartedPulling="2026-01-23 14:20:38.554654678 +0000 UTC m=+985.549483418" lastFinishedPulling="2026-01-23 14:20:38.613969799 +0000 UTC m=+985.608798539" observedRunningTime="2026-01-23 14:20:39.76875237 +0000 UTC m=+986.763581110" watchObservedRunningTime="2026-01-23 14:20:39.776140464 +0000 UTC m=+986.770969214" Jan 23 14:20:48 crc kubenswrapper[4775]: I0123 14:20:48.307144 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-operator-index-5czdz" Jan 23 14:20:48 crc kubenswrapper[4775]: I0123 14:20:48.307850 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-5czdz" Jan 23 14:20:48 crc kubenswrapper[4775]: I0123 14:20:48.345148 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-5czdz" Jan 23 14:20:48 crc kubenswrapper[4775]: I0123 14:20:48.847183 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-5czdz" Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.124758 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt"] Jan 23 14:20:55 crc kubenswrapper[4775]: E0123 14:20:55.126038 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24d6c5ef-17e4-48d8-ab0c-d5909563e217" containerName="registry-server" Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.126072 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d6c5ef-17e4-48d8-ab0c-d5909563e217" containerName="registry-server" Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.126374 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="24d6c5ef-17e4-48d8-ab0c-d5909563e217" containerName="registry-server" Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.128535 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.131966 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-nklzs" Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.135391 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt"] Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.211130 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/100f3a0b-4d11-495f-a6fe-57b196820ee3-util\") pod \"0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt\" (UID: \"100f3a0b-4d11-495f-a6fe-57b196820ee3\") " pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.211236 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/100f3a0b-4d11-495f-a6fe-57b196820ee3-bundle\") pod \"0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt\" (UID: \"100f3a0b-4d11-495f-a6fe-57b196820ee3\") " pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.211291 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plqcc\" (UniqueName: \"kubernetes.io/projected/100f3a0b-4d11-495f-a6fe-57b196820ee3-kube-api-access-plqcc\") pod \"0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt\" (UID: \"100f3a0b-4d11-495f-a6fe-57b196820ee3\") " pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 
14:20:55.312706 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/100f3a0b-4d11-495f-a6fe-57b196820ee3-bundle\") pod \"0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt\" (UID: \"100f3a0b-4d11-495f-a6fe-57b196820ee3\") " pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.312755 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plqcc\" (UniqueName: \"kubernetes.io/projected/100f3a0b-4d11-495f-a6fe-57b196820ee3-kube-api-access-plqcc\") pod \"0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt\" (UID: \"100f3a0b-4d11-495f-a6fe-57b196820ee3\") " pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.312838 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/100f3a0b-4d11-495f-a6fe-57b196820ee3-util\") pod \"0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt\" (UID: \"100f3a0b-4d11-495f-a6fe-57b196820ee3\") " pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.313193 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/100f3a0b-4d11-495f-a6fe-57b196820ee3-bundle\") pod \"0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt\" (UID: \"100f3a0b-4d11-495f-a6fe-57b196820ee3\") " pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.313230 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/100f3a0b-4d11-495f-a6fe-57b196820ee3-util\") pod \"0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt\" (UID: \"100f3a0b-4d11-495f-a6fe-57b196820ee3\") " pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.329912 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plqcc\" (UniqueName: \"kubernetes.io/projected/100f3a0b-4d11-495f-a6fe-57b196820ee3-kube-api-access-plqcc\") pod \"0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt\" (UID: \"100f3a0b-4d11-495f-a6fe-57b196820ee3\") " pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.459878 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.737497 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt"] Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.883356 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" event={"ID":"100f3a0b-4d11-495f-a6fe-57b196820ee3","Type":"ContainerStarted","Data":"86f5ecabdbde55812ad0e083a9973bb49d18923227d9246edb8e853ffbbb41fb"} Jan 23 14:20:56 crc kubenswrapper[4775]: I0123 14:20:56.894143 4775 generic.go:334] "Generic (PLEG): container finished" podID="100f3a0b-4d11-495f-a6fe-57b196820ee3" containerID="748926d979307840df862b3e8f8b2a8902ec143e560c1137489f7bd552663379" exitCode=0 Jan 23 14:20:56 crc kubenswrapper[4775]: I0123 14:20:56.894228 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" event={"ID":"100f3a0b-4d11-495f-a6fe-57b196820ee3","Type":"ContainerDied","Data":"748926d979307840df862b3e8f8b2a8902ec143e560c1137489f7bd552663379"} Jan 23 14:20:57 crc kubenswrapper[4775]: I0123 14:20:57.903616 4775 generic.go:334] "Generic (PLEG): container finished" podID="100f3a0b-4d11-495f-a6fe-57b196820ee3" containerID="90697243af1d8793a0de6b6bb13bd408648462fe0026e927bf20ccdddfadab9e" exitCode=0 Jan 23 14:20:57 crc kubenswrapper[4775]: I0123 14:20:57.904010 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" event={"ID":"100f3a0b-4d11-495f-a6fe-57b196820ee3","Type":"ContainerDied","Data":"90697243af1d8793a0de6b6bb13bd408648462fe0026e927bf20ccdddfadab9e"} Jan 23 14:20:58 crc kubenswrapper[4775]: I0123 14:20:58.915645 4775 generic.go:334] "Generic (PLEG): container finished" podID="100f3a0b-4d11-495f-a6fe-57b196820ee3" containerID="e7f9c451ab98994cd8a9da6ff9f29bbbd0ecc9fe81daeede319592422077c4c5" exitCode=0 Jan 23 14:20:58 crc kubenswrapper[4775]: I0123 14:20:58.915762 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" event={"ID":"100f3a0b-4d11-495f-a6fe-57b196820ee3","Type":"ContainerDied","Data":"e7f9c451ab98994cd8a9da6ff9f29bbbd0ecc9fe81daeede319592422077c4c5"} Jan 23 14:21:00 crc kubenswrapper[4775]: I0123 14:21:00.276368 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" Jan 23 14:21:00 crc kubenswrapper[4775]: I0123 14:21:00.391706 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plqcc\" (UniqueName: \"kubernetes.io/projected/100f3a0b-4d11-495f-a6fe-57b196820ee3-kube-api-access-plqcc\") pod \"100f3a0b-4d11-495f-a6fe-57b196820ee3\" (UID: \"100f3a0b-4d11-495f-a6fe-57b196820ee3\") " Jan 23 14:21:00 crc kubenswrapper[4775]: I0123 14:21:00.391883 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/100f3a0b-4d11-495f-a6fe-57b196820ee3-bundle\") pod \"100f3a0b-4d11-495f-a6fe-57b196820ee3\" (UID: \"100f3a0b-4d11-495f-a6fe-57b196820ee3\") " Jan 23 14:21:00 crc kubenswrapper[4775]: I0123 14:21:00.391949 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/100f3a0b-4d11-495f-a6fe-57b196820ee3-util\") pod \"100f3a0b-4d11-495f-a6fe-57b196820ee3\" (UID: \"100f3a0b-4d11-495f-a6fe-57b196820ee3\") " Jan 23 14:21:00 crc kubenswrapper[4775]: I0123 14:21:00.393619 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/100f3a0b-4d11-495f-a6fe-57b196820ee3-bundle" (OuterVolumeSpecName: "bundle") pod "100f3a0b-4d11-495f-a6fe-57b196820ee3" (UID: "100f3a0b-4d11-495f-a6fe-57b196820ee3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:21:00 crc kubenswrapper[4775]: I0123 14:21:00.400531 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/100f3a0b-4d11-495f-a6fe-57b196820ee3-kube-api-access-plqcc" (OuterVolumeSpecName: "kube-api-access-plqcc") pod "100f3a0b-4d11-495f-a6fe-57b196820ee3" (UID: "100f3a0b-4d11-495f-a6fe-57b196820ee3"). InnerVolumeSpecName "kube-api-access-plqcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:21:00 crc kubenswrapper[4775]: I0123 14:21:00.425797 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/100f3a0b-4d11-495f-a6fe-57b196820ee3-util" (OuterVolumeSpecName: "util") pod "100f3a0b-4d11-495f-a6fe-57b196820ee3" (UID: "100f3a0b-4d11-495f-a6fe-57b196820ee3"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:21:00 crc kubenswrapper[4775]: I0123 14:21:00.493511 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plqcc\" (UniqueName: \"kubernetes.io/projected/100f3a0b-4d11-495f-a6fe-57b196820ee3-kube-api-access-plqcc\") on node \"crc\" DevicePath \"\"" Jan 23 14:21:00 crc kubenswrapper[4775]: I0123 14:21:00.493647 4775 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/100f3a0b-4d11-495f-a6fe-57b196820ee3-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:21:00 crc kubenswrapper[4775]: I0123 14:21:00.493697 4775 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/100f3a0b-4d11-495f-a6fe-57b196820ee3-util\") on node \"crc\" DevicePath \"\"" Jan 23 14:21:00 crc kubenswrapper[4775]: I0123 14:21:00.938662 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" event={"ID":"100f3a0b-4d11-495f-a6fe-57b196820ee3","Type":"ContainerDied","Data":"86f5ecabdbde55812ad0e083a9973bb49d18923227d9246edb8e853ffbbb41fb"} Jan 23 14:21:00 crc kubenswrapper[4775]: I0123 14:21:00.939098 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86f5ecabdbde55812ad0e083a9973bb49d18923227d9246edb8e853ffbbb41fb" Jan 23 14:21:00 crc kubenswrapper[4775]: I0123 14:21:00.938764 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" Jan 23 14:21:07 crc kubenswrapper[4775]: I0123 14:21:07.826062 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w"] Jan 23 14:21:07 crc kubenswrapper[4775]: E0123 14:21:07.827448 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="100f3a0b-4d11-495f-a6fe-57b196820ee3" containerName="pull" Jan 23 14:21:07 crc kubenswrapper[4775]: I0123 14:21:07.827482 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="100f3a0b-4d11-495f-a6fe-57b196820ee3" containerName="pull" Jan 23 14:21:07 crc kubenswrapper[4775]: E0123 14:21:07.827511 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="100f3a0b-4d11-495f-a6fe-57b196820ee3" containerName="util" Jan 23 14:21:07 crc kubenswrapper[4775]: I0123 14:21:07.827530 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="100f3a0b-4d11-495f-a6fe-57b196820ee3" containerName="util" Jan 23 14:21:07 crc kubenswrapper[4775]: E0123 14:21:07.827562 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="100f3a0b-4d11-495f-a6fe-57b196820ee3" containerName="extract" Jan 23 14:21:07 crc kubenswrapper[4775]: I0123 14:21:07.827580 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="100f3a0b-4d11-495f-a6fe-57b196820ee3" containerName="extract" Jan 23 14:21:07 crc kubenswrapper[4775]: I0123 14:21:07.827877 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="100f3a0b-4d11-495f-a6fe-57b196820ee3" containerName="extract" Jan 23 14:21:07 crc kubenswrapper[4775]: I0123 14:21:07.828747 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w" Jan 23 14:21:07 crc kubenswrapper[4775]: I0123 14:21:07.830858 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-nbgb2" Jan 23 14:21:07 crc kubenswrapper[4775]: I0123 14:21:07.854858 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w"] Jan 23 14:21:08 crc kubenswrapper[4775]: I0123 14:21:08.008465 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnvz4\" (UniqueName: \"kubernetes.io/projected/355da547-d965-4754-8730-b9c8a20fd930-kube-api-access-qnvz4\") pod \"openstack-operator-controller-init-86f7b68b5c-stl6w\" (UID: \"355da547-d965-4754-8730-b9c8a20fd930\") " pod="openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w" Jan 23 14:21:08 crc kubenswrapper[4775]: I0123 14:21:08.109819 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnvz4\" (UniqueName: \"kubernetes.io/projected/355da547-d965-4754-8730-b9c8a20fd930-kube-api-access-qnvz4\") pod \"openstack-operator-controller-init-86f7b68b5c-stl6w\" (UID: \"355da547-d965-4754-8730-b9c8a20fd930\") " pod="openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w" Jan 23 14:21:08 crc kubenswrapper[4775]: I0123 14:21:08.148880 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnvz4\" (UniqueName: \"kubernetes.io/projected/355da547-d965-4754-8730-b9c8a20fd930-kube-api-access-qnvz4\") pod \"openstack-operator-controller-init-86f7b68b5c-stl6w\" (UID: \"355da547-d965-4754-8730-b9c8a20fd930\") " pod="openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w" Jan 23 14:21:08 crc kubenswrapper[4775]: I0123 14:21:08.150568 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w" Jan 23 14:21:08 crc kubenswrapper[4775]: I0123 14:21:08.687542 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w"] Jan 23 14:21:08 crc kubenswrapper[4775]: I0123 14:21:08.995523 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w" event={"ID":"355da547-d965-4754-8730-b9c8a20fd930","Type":"ContainerStarted","Data":"9226d2ede7beb9208ad931c1d54e8ae0eea8cc9501e5c82efcf4ccfa1586382e"} Jan 23 14:21:14 crc kubenswrapper[4775]: I0123 14:21:14.051254 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w" event={"ID":"355da547-d965-4754-8730-b9c8a20fd930","Type":"ContainerStarted","Data":"29bef9650740f55bafd48157808e3591f52eafd13be1ee85e76f5102a8d9c94d"} Jan 23 14:21:14 crc kubenswrapper[4775]: I0123 14:21:14.052106 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w" Jan 23 14:21:14 crc kubenswrapper[4775]: I0123 14:21:14.101662 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w" podStartSLOduration=2.707238648 podStartE2EDuration="7.101642534s" podCreationTimestamp="2026-01-23 14:21:07 +0000 UTC" firstStartedPulling="2026-01-23 14:21:08.704331155 +0000 UTC m=+1015.699159925" lastFinishedPulling="2026-01-23 14:21:13.098735061 +0000 UTC m=+1020.093563811" observedRunningTime="2026-01-23 14:21:14.089559853 +0000 UTC m=+1021.084388623" watchObservedRunningTime="2026-01-23 14:21:14.101642534 +0000 UTC m=+1021.096471284" Jan 23 14:21:18 crc kubenswrapper[4775]: I0123 14:21:18.154793 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.269305 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-pk9jd"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.270766 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-pk9jd" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.272681 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-mdshv" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.280032 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-dz7ft"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.281630 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-dz7ft" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.283692 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-9cst5" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.287944 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkts6\" (UniqueName: \"kubernetes.io/projected/56ee00d0-c0f0-442a-bf4a-7335b62c1c4e-kube-api-access-mkts6\") pod \"barbican-operator-controller-manager-7f86f8796f-pk9jd\" (UID: \"56ee00d0-c0f0-442a-bf4a-7335b62c1c4e\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-pk9jd" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.291963 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-pk9jd"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.307596 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-ppxmc"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.314337 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-ppxmc" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.321211 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-g8vfs" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.334955 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-dz7ft"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.358664 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-ppxmc"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.380551 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-xrmvt"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.381280 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-xrmvt" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.382934 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-6nb6c" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.385148 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-jq89z"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.385699 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jq89z" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.388395 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-cp4vx" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.392296 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdmkp\" (UniqueName: \"kubernetes.io/projected/9ce79c2a-2c52-48de-80a6-887d592578d3-kube-api-access-xdmkp\") pod \"cinder-operator-controller-manager-69cf5d4557-dz7ft\" (UID: \"9ce79c2a-2c52-48de-80a6-887d592578d3\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-dz7ft" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.392384 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvk75\" (UniqueName: \"kubernetes.io/projected/352223d5-fa0a-43df-8bad-0eaa9b6b439d-kube-api-access-xvk75\") pod \"designate-operator-controller-manager-b45d7bf98-ppxmc\" (UID: \"352223d5-fa0a-43df-8bad-0eaa9b6b439d\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-ppxmc" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.392414 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkts6\" (UniqueName: \"kubernetes.io/projected/56ee00d0-c0f0-442a-bf4a-7335b62c1c4e-kube-api-access-mkts6\") pod \"barbican-operator-controller-manager-7f86f8796f-pk9jd\" (UID: \"56ee00d0-c0f0-442a-bf4a-7335b62c1c4e\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-pk9jd" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.405852 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-xrmvt"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.410692 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-jq89z"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.425936 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sg9x5"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.426698 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sg9x5" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.429281 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-rgcdb" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.437521 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkts6\" (UniqueName: \"kubernetes.io/projected/56ee00d0-c0f0-442a-bf4a-7335b62c1c4e-kube-api-access-mkts6\") pod \"barbican-operator-controller-manager-7f86f8796f-pk9jd\" (UID: \"56ee00d0-c0f0-442a-bf4a-7335b62c1c4e\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-pk9jd" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.452871 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sg9x5"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.455913 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.456603 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.461352 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.461594 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-jgkdm" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.467206 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-f7lm6"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.467922 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-f7lm6" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.471408 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-nljzc" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.484141 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.493782 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44lzg\" (UniqueName: \"kubernetes.io/projected/d98bebb2-a42a-45a6-b452-a82ce1f62896-kube-api-access-44lzg\") pod \"ironic-operator-controller-manager-598f7747c9-f7lm6\" (UID: \"d98bebb2-a42a-45a6-b452-a82ce1f62896\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-f7lm6" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.493831 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg8zq\" (UniqueName: \"kubernetes.io/projected/64bae0eb-d703-4058-a545-b42d62045b90-kube-api-access-cg8zq\") pod \"glance-operator-controller-manager-78fdd796fd-jq89z\" (UID: \"64bae0eb-d703-4058-a545-b42d62045b90\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jq89z" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.493855 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d67l\" (UniqueName: \"kubernetes.io/projected/841fb528-61a8-445e-a135-be26295bc975-kube-api-access-4d67l\") pod \"heat-operator-controller-manager-594c8c9d5d-xrmvt\" (UID: \"841fb528-61a8-445e-a135-be26295bc975\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-xrmvt" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.493892 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvk75\" (UniqueName: \"kubernetes.io/projected/352223d5-fa0a-43df-8bad-0eaa9b6b439d-kube-api-access-xvk75\") pod \"designate-operator-controller-manager-b45d7bf98-ppxmc\" (UID: \"352223d5-fa0a-43df-8bad-0eaa9b6b439d\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-ppxmc" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.493919 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5klj\" (UniqueName: \"kubernetes.io/projected/d9e69fcf-58c9-45fe-a291-4628c8219e10-kube-api-access-z5klj\") pod \"horizon-operator-controller-manager-77d5c5b54f-sg9x5\" (UID: \"d9e69fcf-58c9-45fe-a291-4628c8219e10\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sg9x5" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.493940 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert\") pod \"infra-operator-controller-manager-58749ffdfb-mcrj4\" (UID: \"5a65a9ef-28c7-46ae-826d-5546af1103a5\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.493981 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh5df\" (UniqueName: 
\"kubernetes.io/projected/5a65a9ef-28c7-46ae-826d-5546af1103a5-kube-api-access-bh5df\") pod \"infra-operator-controller-manager-58749ffdfb-mcrj4\" (UID: \"5a65a9ef-28c7-46ae-826d-5546af1103a5\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.494000 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdmkp\" (UniqueName: \"kubernetes.io/projected/9ce79c2a-2c52-48de-80a6-887d592578d3-kube-api-access-xdmkp\") pod \"cinder-operator-controller-manager-69cf5d4557-dz7ft\" (UID: \"9ce79c2a-2c52-48de-80a6-887d592578d3\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-dz7ft" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.500675 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-f7lm6"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.524042 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-bgbpj"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.524792 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bgbpj" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.526243 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvk75\" (UniqueName: \"kubernetes.io/projected/352223d5-fa0a-43df-8bad-0eaa9b6b439d-kube-api-access-xvk75\") pod \"designate-operator-controller-manager-b45d7bf98-ppxmc\" (UID: \"352223d5-fa0a-43df-8bad-0eaa9b6b439d\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-ppxmc" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.526726 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-db6n4" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.532278 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdmkp\" (UniqueName: \"kubernetes.io/projected/9ce79c2a-2c52-48de-80a6-887d592578d3-kube-api-access-xdmkp\") pod \"cinder-operator-controller-manager-69cf5d4557-dz7ft\" (UID: \"9ce79c2a-2c52-48de-80a6-887d592578d3\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-dz7ft" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.553726 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-pfdc5"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.554552 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-pfdc5" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.558105 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-mbbh9" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.570105 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-bgbpj"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.582865 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-pfdc5"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.589678 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.590546 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.592528 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-99hm6" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.599148 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44lzg\" (UniqueName: \"kubernetes.io/projected/d98bebb2-a42a-45a6-b452-a82ce1f62896-kube-api-access-44lzg\") pod \"ironic-operator-controller-manager-598f7747c9-f7lm6\" (UID: \"d98bebb2-a42a-45a6-b452-a82ce1f62896\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-f7lm6" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.599207 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg8zq\" (UniqueName: \"kubernetes.io/projected/64bae0eb-d703-4058-a545-b42d62045b90-kube-api-access-cg8zq\") pod \"glance-operator-controller-manager-78fdd796fd-jq89z\" (UID: \"64bae0eb-d703-4058-a545-b42d62045b90\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jq89z" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.599237 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d67l\" (UniqueName: \"kubernetes.io/projected/841fb528-61a8-445e-a135-be26295bc975-kube-api-access-4d67l\") pod \"heat-operator-controller-manager-594c8c9d5d-xrmvt\" (UID: \"841fb528-61a8-445e-a135-be26295bc975\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-xrmvt" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.599279 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfgdr\" (UniqueName: \"kubernetes.io/projected/853c6152-25bf-4374-a941-f9cd4202c87f-kube-api-access-pfgdr\") pod \"manila-operator-controller-manager-78c6999f6f-pfdc5\" (UID: \"853c6152-25bf-4374-a941-f9cd4202c87f\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-pfdc5" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.599320 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5klj\" (UniqueName: \"kubernetes.io/projected/d9e69fcf-58c9-45fe-a291-4628c8219e10-kube-api-access-z5klj\") pod \"horizon-operator-controller-manager-77d5c5b54f-sg9x5\" (UID: \"d9e69fcf-58c9-45fe-a291-4628c8219e10\") " 
pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sg9x5" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.599353 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert\") pod \"infra-operator-controller-manager-58749ffdfb-mcrj4\" (UID: \"5a65a9ef-28c7-46ae-826d-5546af1103a5\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.599397 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th6g4\" (UniqueName: \"kubernetes.io/projected/0784c928-e0c5-4afb-99cb-4f1f96820a14-kube-api-access-th6g4\") pod \"keystone-operator-controller-manager-b8b6d4659-bgbpj\" (UID: \"0784c928-e0c5-4afb-99cb-4f1f96820a14\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bgbpj" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.599438 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh5df\" (UniqueName: \"kubernetes.io/projected/5a65a9ef-28c7-46ae-826d-5546af1103a5-kube-api-access-bh5df\") pod \"infra-operator-controller-manager-58749ffdfb-mcrj4\" (UID: \"5a65a9ef-28c7-46ae-826d-5546af1103a5\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.599970 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-pk9jd" Jan 23 14:21:39 crc kubenswrapper[4775]: E0123 14:21:39.600692 4775 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 14:21:39 crc kubenswrapper[4775]: E0123 14:21:39.600727 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert podName:5a65a9ef-28c7-46ae-826d-5546af1103a5 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:40.100713107 +0000 UTC m=+1047.095541847 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert") pod "infra-operator-controller-manager-58749ffdfb-mcrj4" (UID: "5a65a9ef-28c7-46ae-826d-5546af1103a5") : secret "infra-operator-webhook-server-cert" not found Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.605215 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.614678 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-sxkzh"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.615496 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sxkzh" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.617975 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-mgtgs" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.620243 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-dz7ft" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.625647 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-d9495b985-k98mk"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.626737 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-d9495b985-k98mk" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.630163 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-mh2wz" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.630477 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d67l\" (UniqueName: \"kubernetes.io/projected/841fb528-61a8-445e-a135-be26295bc975-kube-api-access-4d67l\") pod \"heat-operator-controller-manager-594c8c9d5d-xrmvt\" (UID: \"841fb528-61a8-445e-a135-be26295bc975\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-xrmvt" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.636151 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh5df\" (UniqueName: \"kubernetes.io/projected/5a65a9ef-28c7-46ae-826d-5546af1103a5-kube-api-access-bh5df\") pod \"infra-operator-controller-manager-58749ffdfb-mcrj4\" (UID: \"5a65a9ef-28c7-46ae-826d-5546af1103a5\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.636653 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5klj\" (UniqueName: \"kubernetes.io/projected/d9e69fcf-58c9-45fe-a291-4628c8219e10-kube-api-access-z5klj\") pod \"horizon-operator-controller-manager-77d5c5b54f-sg9x5\" (UID: \"d9e69fcf-58c9-45fe-a291-4628c8219e10\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sg9x5" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.641728 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-sxkzh"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.643045 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44lzg\" (UniqueName: \"kubernetes.io/projected/d98bebb2-a42a-45a6-b452-a82ce1f62896-kube-api-access-44lzg\") pod \"ironic-operator-controller-manager-598f7747c9-f7lm6\" (UID: \"d98bebb2-a42a-45a6-b452-a82ce1f62896\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-f7lm6" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.645410 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-d9495b985-k98mk"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.647707 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-ppxmc" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.660325 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg8zq\" (UniqueName: \"kubernetes.io/projected/64bae0eb-d703-4058-a545-b42d62045b90-kube-api-access-cg8zq\") pod \"glance-operator-controller-manager-78fdd796fd-jq89z\" (UID: \"64bae0eb-d703-4058-a545-b42d62045b90\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jq89z" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.691040 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-vl7m5"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.691793 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-vl7m5" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.696173 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-vl7m5"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.697382 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-dg9zv" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.700790 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fktnr\" (UniqueName: \"kubernetes.io/projected/9710b785-e422-4aca-88e8-e88d26d4e724-kube-api-access-fktnr\") pod \"neutron-operator-controller-manager-78d58447c5-sxkzh\" (UID: \"9710b785-e422-4aca-88e8-e88d26d4e724\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sxkzh" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.700864 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th6g4\" (UniqueName: \"kubernetes.io/projected/0784c928-e0c5-4afb-99cb-4f1f96820a14-kube-api-access-th6g4\") pod \"keystone-operator-controller-manager-b8b6d4659-bgbpj\" (UID: \"0784c928-e0c5-4afb-99cb-4f1f96820a14\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bgbpj" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.700894 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gskfg\" (UniqueName: \"kubernetes.io/projected/9bad88d6-5ca9-4176-904d-72b793e1361e-kube-api-access-gskfg\") pod \"nova-operator-controller-manager-d9495b985-k98mk\" (UID: \"9bad88d6-5ca9-4176-904d-72b793e1361e\") " pod="openstack-operators/nova-operator-controller-manager-d9495b985-k98mk" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.700928 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc5pl\" (UniqueName: \"kubernetes.io/projected/bb6ce8ae-8d3f-4988-9386-6a20487f8ae9-kube-api-access-qc5pl\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg\" (UID: \"bb6ce8ae-8d3f-4988-9386-6a20487f8ae9\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.700967 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfgdr\" (UniqueName: \"kubernetes.io/projected/853c6152-25bf-4374-a941-f9cd4202c87f-kube-api-access-pfgdr\") pod 
\"manila-operator-controller-manager-78c6999f6f-pfdc5\" (UID: \"853c6152-25bf-4374-a941-f9cd4202c87f\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-pfdc5" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.720415 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfgdr\" (UniqueName: \"kubernetes.io/projected/853c6152-25bf-4374-a941-f9cd4202c87f-kube-api-access-pfgdr\") pod \"manila-operator-controller-manager-78c6999f6f-pfdc5\" (UID: \"853c6152-25bf-4374-a941-f9cd4202c87f\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-pfdc5" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.721504 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-xrmvt" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.722631 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th6g4\" (UniqueName: \"kubernetes.io/projected/0784c928-e0c5-4afb-99cb-4f1f96820a14-kube-api-access-th6g4\") pod \"keystone-operator-controller-manager-b8b6d4659-bgbpj\" (UID: \"0784c928-e0c5-4afb-99cb-4f1f96820a14\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bgbpj" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.733537 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.744847 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.754664 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.758547 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-ktsgf" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.762055 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jq89z" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.785016 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sg9x5" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.804394 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-xst4r"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.806596 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fktnr\" (UniqueName: \"kubernetes.io/projected/9710b785-e422-4aca-88e8-e88d26d4e724-kube-api-access-fktnr\") pod \"neutron-operator-controller-manager-78d58447c5-sxkzh\" (UID: \"9710b785-e422-4aca-88e8-e88d26d4e724\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sxkzh" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.806650 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frhxs\" (UniqueName: \"kubernetes.io/projected/a07598ff-60cc-482e-a551-af751575709c-kube-api-access-frhxs\") pod \"octavia-operator-controller-manager-7bd9774b6-vl7m5\" (UID: \"a07598ff-60cc-482e-a551-af751575709c\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-vl7m5" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.806688 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gskfg\" (UniqueName: \"kubernetes.io/projected/9bad88d6-5ca9-4176-904d-72b793e1361e-kube-api-access-gskfg\") pod \"nova-operator-controller-manager-d9495b985-k98mk\" (UID: \"9bad88d6-5ca9-4176-904d-72b793e1361e\") " pod="openstack-operators/nova-operator-controller-manager-d9495b985-k98mk" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.806725 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvdr7\" (UniqueName: \"kubernetes.io/projected/44a963d8-d403-42d5-acd2-a0379f07db51-kube-api-access-dvdr7\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854zk48c\" (UID: \"44a963d8-d403-42d5-acd2-a0379f07db51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.806758 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc5pl\" (UniqueName: \"kubernetes.io/projected/bb6ce8ae-8d3f-4988-9386-6a20487f8ae9-kube-api-access-qc5pl\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg\" (UID: \"bb6ce8ae-8d3f-4988-9386-6a20487f8ae9\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.806773 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854zk48c\" (UID: \"44a963d8-d403-42d5-acd2-a0379f07db51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.809488 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.809521 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-n4k5s"] Jan 23 14:21:39 crc 
kubenswrapper[4775]: I0123 14:21:39.809625 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xst4r" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.811365 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-c4pmt" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.811371 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-xst4r"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.811558 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-nqw74"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.812278 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-n4k5s" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.812649 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-n4k5s"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.812670 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-nqw74"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.812681 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-jrhlh"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.813021 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-nqw74" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.813281 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-jrhlh" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.815028 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-lswqj" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.815503 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-bjttx" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.815714 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-rgpzh" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.819043 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-jrhlh"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.829462 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc5pl\" (UniqueName: \"kubernetes.io/projected/bb6ce8ae-8d3f-4988-9386-6a20487f8ae9-kube-api-access-qc5pl\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg\" (UID: \"bb6ce8ae-8d3f-4988-9386-6a20487f8ae9\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.832196 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gskfg\" (UniqueName: \"kubernetes.io/projected/9bad88d6-5ca9-4176-904d-72b793e1361e-kube-api-access-gskfg\") pod \"nova-operator-controller-manager-d9495b985-k98mk\" (UID: \"9bad88d6-5ca9-4176-904d-72b793e1361e\") " pod="openstack-operators/nova-operator-controller-manager-d9495b985-k98mk" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.833266 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-f7lm6" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.840436 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fktnr\" (UniqueName: \"kubernetes.io/projected/9710b785-e422-4aca-88e8-e88d26d4e724-kube-api-access-fktnr\") pod \"neutron-operator-controller-manager-78d58447c5-sxkzh\" (UID: \"9710b785-e422-4aca-88e8-e88d26d4e724\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sxkzh" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.846020 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-xtmz8"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.847091 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xtmz8" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.854552 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-vfzg6" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.867564 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-xtmz8"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.883548 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bgbpj" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.898459 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-pfdc5" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.908648 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6d9458688d-v8dw9"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.910319 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-v8dw9" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.913225 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-7jvgx" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.913608 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6d9458688d-v8dw9"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.914216 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj998\" (UniqueName: \"kubernetes.io/projected/9f9597bf-12a1-4204-ac57-37c4c0189687-kube-api-access-lj998\") pod \"test-operator-controller-manager-69797bbcbd-xtmz8\" (UID: \"9f9597bf-12a1-4204-ac57-37c4c0189687\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xtmz8" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.914256 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frhxs\" (UniqueName: \"kubernetes.io/projected/a07598ff-60cc-482e-a551-af751575709c-kube-api-access-frhxs\") pod \"octavia-operator-controller-manager-7bd9774b6-vl7m5\" (UID: \"a07598ff-60cc-482e-a551-af751575709c\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-vl7m5" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.914278 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87fmw\" (UniqueName: \"kubernetes.io/projected/91da96b4-921a-4b88-9804-55745989e08b-kube-api-access-87fmw\") pod \"telemetry-operator-controller-manager-85cd9769bb-jrhlh\" (UID: \"91da96b4-921a-4b88-9804-55745989e08b\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-jrhlh" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.914312 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmtrg\" (UniqueName: \"kubernetes.io/projected/3d7c7bc6-5124-4cd4-a406-448ca94ba640-kube-api-access-rmtrg\") pod \"ovn-operator-controller-manager-55db956ddc-xst4r\" (UID: \"3d7c7bc6-5124-4cd4-a406-448ca94ba640\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xst4r" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.914337 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srgq4\" (UniqueName: \"kubernetes.io/projected/072b9a9d-8a08-454c-b1b6-628fcdcc91df-kube-api-access-srgq4\") pod \"placement-operator-controller-manager-5d646b7d76-n4k5s\" (UID: \"072b9a9d-8a08-454c-b1b6-628fcdcc91df\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-n4k5s" Jan 23 14:21:39 crc kubenswrapper[4775]: 
I0123 14:21:39.914359 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvdr7\" (UniqueName: \"kubernetes.io/projected/44a963d8-d403-42d5-acd2-a0379f07db51-kube-api-access-dvdr7\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854zk48c\" (UID: \"44a963d8-d403-42d5-acd2-a0379f07db51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.914380 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854zk48c\" (UID: \"44a963d8-d403-42d5-acd2-a0379f07db51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.914421 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96z89\" (UniqueName: \"kubernetes.io/projected/ecef6080-ea2c-43f4-8ffa-da2ceb59369d-kube-api-access-96z89\") pod \"swift-operator-controller-manager-547cbdb99f-nqw74\" (UID: \"ecef6080-ea2c-43f4-8ffa-da2ceb59369d\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-nqw74" Jan 23 14:21:39 crc kubenswrapper[4775]: E0123 14:21:39.914685 4775 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 14:21:39 crc kubenswrapper[4775]: E0123 14:21:39.914739 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert podName:44a963d8-d403-42d5-acd2-a0379f07db51 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:40.414724441 +0000 UTC m=+1047.409553181 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" (UID: "44a963d8-d403-42d5-acd2-a0379f07db51") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.940509 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvdr7\" (UniqueName: \"kubernetes.io/projected/44a963d8-d403-42d5-acd2-a0379f07db51-kube-api-access-dvdr7\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854zk48c\" (UID: \"44a963d8-d403-42d5-acd2-a0379f07db51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.947545 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frhxs\" (UniqueName: \"kubernetes.io/projected/a07598ff-60cc-482e-a551-af751575709c-kube-api-access-frhxs\") pod \"octavia-operator-controller-manager-7bd9774b6-vl7m5\" (UID: \"a07598ff-60cc-482e-a551-af751575709c\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-vl7m5" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.994415 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.995478 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.999196 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.999412 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-fsvsf" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.999523 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.017608 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.018610 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srgq4\" (UniqueName: \"kubernetes.io/projected/072b9a9d-8a08-454c-b1b6-628fcdcc91df-kube-api-access-srgq4\") pod \"placement-operator-controller-manager-5d646b7d76-n4k5s\" (UID: \"072b9a9d-8a08-454c-b1b6-628fcdcc91df\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-n4k5s" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.018724 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96qrp\" (UniqueName: \"kubernetes.io/projected/272dcd84-1bb6-42cb-8c8e-6851f9f031de-kube-api-access-96qrp\") pod \"watcher-operator-controller-manager-6d9458688d-v8dw9\" (UID: \"272dcd84-1bb6-42cb-8c8e-6851f9f031de\") " pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-v8dw9" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.018865 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96z89\" (UniqueName: \"kubernetes.io/projected/ecef6080-ea2c-43f4-8ffa-da2ceb59369d-kube-api-access-96z89\") pod \"swift-operator-controller-manager-547cbdb99f-nqw74\" (UID: \"ecef6080-ea2c-43f4-8ffa-da2ceb59369d\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-nqw74" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.018969 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj998\" (UniqueName: \"kubernetes.io/projected/9f9597bf-12a1-4204-ac57-37c4c0189687-kube-api-access-lj998\") pod \"test-operator-controller-manager-69797bbcbd-xtmz8\" (UID: \"9f9597bf-12a1-4204-ac57-37c4c0189687\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xtmz8" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.019046 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87fmw\" (UniqueName: \"kubernetes.io/projected/91da96b4-921a-4b88-9804-55745989e08b-kube-api-access-87fmw\") pod \"telemetry-operator-controller-manager-85cd9769bb-jrhlh\" (UID: \"91da96b4-921a-4b88-9804-55745989e08b\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-jrhlh" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.019123 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmtrg\" (UniqueName: \"kubernetes.io/projected/3d7c7bc6-5124-4cd4-a406-448ca94ba640-kube-api-access-rmtrg\") pod \"ovn-operator-controller-manager-55db956ddc-xst4r\" (UID: 
\"3d7c7bc6-5124-4cd4-a406-448ca94ba640\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xst4r" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.036140 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sxkzh" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.036944 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9"] Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.046206 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87fmw\" (UniqueName: \"kubernetes.io/projected/91da96b4-921a-4b88-9804-55745989e08b-kube-api-access-87fmw\") pod \"telemetry-operator-controller-manager-85cd9769bb-jrhlh\" (UID: \"91da96b4-921a-4b88-9804-55745989e08b\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-jrhlh" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.087450 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96z89\" (UniqueName: \"kubernetes.io/projected/ecef6080-ea2c-43f4-8ffa-da2ceb59369d-kube-api-access-96z89\") pod \"swift-operator-controller-manager-547cbdb99f-nqw74\" (UID: \"ecef6080-ea2c-43f4-8ffa-da2ceb59369d\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-nqw74" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.087941 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj998\" (UniqueName: \"kubernetes.io/projected/9f9597bf-12a1-4204-ac57-37c4c0189687-kube-api-access-lj998\") pod \"test-operator-controller-manager-69797bbcbd-xtmz8\" (UID: \"9f9597bf-12a1-4204-ac57-37c4c0189687\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xtmz8" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.088363 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srgq4\" (UniqueName: \"kubernetes.io/projected/072b9a9d-8a08-454c-b1b6-628fcdcc91df-kube-api-access-srgq4\") pod \"placement-operator-controller-manager-5d646b7d76-n4k5s\" (UID: \"072b9a9d-8a08-454c-b1b6-628fcdcc91df\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-n4k5s" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.103669 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xtmz8" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.106004 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lhsf"] Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.115322 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lhsf" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.127274 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-56whh" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.131267 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-d9495b985-k98mk" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.134386 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.134433 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x9hk\" (UniqueName: \"kubernetes.io/projected/313b5382-60cf-4627-8ba7-a091fc457989-kube-api-access-7x9hk\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.134467 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert\") pod \"infra-operator-controller-manager-58749ffdfb-mcrj4\" (UID: \"5a65a9ef-28c7-46ae-826d-5546af1103a5\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.134530 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96qrp\" (UniqueName: \"kubernetes.io/projected/272dcd84-1bb6-42cb-8c8e-6851f9f031de-kube-api-access-96qrp\") pod \"watcher-operator-controller-manager-6d9458688d-v8dw9\" (UID: \"272dcd84-1bb6-42cb-8c8e-6851f9f031de\") " pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-v8dw9" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.134566 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:40 crc kubenswrapper[4775]: E0123 14:21:40.134721 4775 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 14:21:40 crc kubenswrapper[4775]: E0123 14:21:40.134766 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert podName:5a65a9ef-28c7-46ae-826d-5546af1103a5 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:41.134749363 +0000 UTC m=+1048.129578093 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert") pod "infra-operator-controller-manager-58749ffdfb-mcrj4" (UID: "5a65a9ef-28c7-46ae-826d-5546af1103a5") : secret "infra-operator-webhook-server-cert" not found
Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.138082 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lhsf"]
Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.157006 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-vl7m5"
Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.193348 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmtrg\" (UniqueName: \"kubernetes.io/projected/3d7c7bc6-5124-4cd4-a406-448ca94ba640-kube-api-access-rmtrg\") pod \"ovn-operator-controller-manager-55db956ddc-xst4r\" (UID: \"3d7c7bc6-5124-4cd4-a406-448ca94ba640\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xst4r"
Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.193876 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96qrp\" (UniqueName: \"kubernetes.io/projected/272dcd84-1bb6-42cb-8c8e-6851f9f031de-kube-api-access-96qrp\") pod \"watcher-operator-controller-manager-6d9458688d-v8dw9\" (UID: \"272dcd84-1bb6-42cb-8c8e-6851f9f031de\") " pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-v8dw9"
Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.203593 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-jrhlh"
Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.234561 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-pk9jd"]
Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.235166 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-n4k5s" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.235495 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66h6p\" (UniqueName: \"kubernetes.io/projected/f9da51f1-a035-44b8-9391-0d6018a84c61-kube-api-access-66h6p\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2lhsf\" (UID: \"f9da51f1-a035-44b8-9391-0d6018a84c61\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lhsf" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.235573 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.235632 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.235659 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x9hk\" (UniqueName: \"kubernetes.io/projected/313b5382-60cf-4627-8ba7-a091fc457989-kube-api-access-7x9hk\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:40 crc kubenswrapper[4775]: E0123 14:21:40.235914 4775 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 14:21:40 crc kubenswrapper[4775]: E0123 14:21:40.236251 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs podName:313b5382-60cf-4627-8ba7-a091fc457989 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:40.736114258 +0000 UTC m=+1047.730942998 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs") pod "openstack-operator-controller-manager-bb8f85db-bkqk9" (UID: "313b5382-60cf-4627-8ba7-a091fc457989") : secret "webhook-server-cert" not found Jan 23 14:21:40 crc kubenswrapper[4775]: E0123 14:21:40.236402 4775 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 14:21:40 crc kubenswrapper[4775]: E0123 14:21:40.236490 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs podName:313b5382-60cf-4627-8ba7-a091fc457989 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:40.736481779 +0000 UTC m=+1047.731310519 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs") pod "openstack-operator-controller-manager-bb8f85db-bkqk9" (UID: "313b5382-60cf-4627-8ba7-a091fc457989") : secret "metrics-server-cert" not found Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.260226 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x9hk\" (UniqueName: \"kubernetes.io/projected/313b5382-60cf-4627-8ba7-a091fc457989-kube-api-access-7x9hk\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.284592 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-nqw74" Jan 23 14:21:40 crc kubenswrapper[4775]: W0123 14:21:40.336185 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56ee00d0_c0f0_442a_bf4a_7335b62c1c4e.slice/crio-afc85f9ff0a5991ece4bbf47c4ef497926f8ea2d8e48f8f279a94d996b32ac39 WatchSource:0}: Error finding container afc85f9ff0a5991ece4bbf47c4ef497926f8ea2d8e48f8f279a94d996b32ac39: Status 404 returned error can't find the container with id afc85f9ff0a5991ece4bbf47c4ef497926f8ea2d8e48f8f279a94d996b32ac39 Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.336440 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66h6p\" (UniqueName: \"kubernetes.io/projected/f9da51f1-a035-44b8-9391-0d6018a84c61-kube-api-access-66h6p\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2lhsf\" (UID: \"f9da51f1-a035-44b8-9391-0d6018a84c61\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lhsf" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.358534 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66h6p\" (UniqueName: \"kubernetes.io/projected/f9da51f1-a035-44b8-9391-0d6018a84c61-kube-api-access-66h6p\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2lhsf\" (UID: \"f9da51f1-a035-44b8-9391-0d6018a84c61\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lhsf" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.431648 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-v8dw9" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.437767 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854zk48c\" (UID: \"44a963d8-d403-42d5-acd2-a0379f07db51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" Jan 23 14:21:40 crc kubenswrapper[4775]: E0123 14:21:40.437979 4775 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 14:21:40 crc kubenswrapper[4775]: E0123 14:21:40.438025 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert podName:44a963d8-d403-42d5-acd2-a0379f07db51 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:41.438010445 +0000 UTC m=+1048.432839185 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" (UID: "44a963d8-d403-42d5-acd2-a0379f07db51") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.451598 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xst4r" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.475066 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-dz7ft"] Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.484164 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-ppxmc"] Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.551196 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lhsf" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.665218 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-jq89z"] Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.679903 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sg9x5"] Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.682934 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-xrmvt"] Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.750786 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.750884 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:40 crc kubenswrapper[4775]: E0123 14:21:40.750987 4775 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 14:21:40 crc kubenswrapper[4775]: E0123 14:21:40.751027 4775 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 14:21:40 crc kubenswrapper[4775]: E0123 14:21:40.751039 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs podName:313b5382-60cf-4627-8ba7-a091fc457989 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:41.75102321 +0000 UTC m=+1048.745851950 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs") pod "openstack-operator-controller-manager-bb8f85db-bkqk9" (UID: "313b5382-60cf-4627-8ba7-a091fc457989") : secret "metrics-server-cert" not found Jan 23 14:21:40 crc kubenswrapper[4775]: E0123 14:21:40.751073 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs podName:313b5382-60cf-4627-8ba7-a091fc457989 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:41.751058631 +0000 UTC m=+1048.745887361 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs") pod "openstack-operator-controller-manager-bb8f85db-bkqk9" (UID: "313b5382-60cf-4627-8ba7-a091fc457989") : secret "webhook-server-cert" not found
Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.869981 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg"]
Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.877323 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-bgbpj"]
Jan 23 14:21:40 crc kubenswrapper[4775]: W0123 14:21:40.879155 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb6ce8ae_8d3f_4988_9386_6a20487f8ae9.slice/crio-4bde7297fedac1ac3c4c59edc7c48cbc5ede515071e0bdafb5070d227e39937e WatchSource:0}: Error finding container 4bde7297fedac1ac3c4c59edc7c48cbc5ede515071e0bdafb5070d227e39937e: Status 404 returned error can't find the container with id 4bde7297fedac1ac3c4c59edc7c48cbc5ede515071e0bdafb5070d227e39937e
Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.887984 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-f7lm6"]
Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.066568 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-nqw74"]
Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.089454 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-pfdc5"]
Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.098734 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-d9495b985-k98mk"]
Jan 23 14:21:41 crc kubenswrapper[4775]: W0123 14:21:41.098937 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9710b785_e422_4aca_88e8_e88d26d4e724.slice/crio-999d037cdd9246310854c4612f0c862016dfb68a7ac81ae3beafc0e260f540b2 WatchSource:0}: Error finding container 999d037cdd9246310854c4612f0c862016dfb68a7ac81ae3beafc0e260f540b2: Status 404 returned error can't find the container with id 999d037cdd9246310854c4612f0c862016dfb68a7ac81ae3beafc0e260f540b2
Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.103940 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-sxkzh"]
Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.156917 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert\") pod \"infra-operator-controller-manager-58749ffdfb-mcrj4\" (UID: \"5a65a9ef-28c7-46ae-826d-5546af1103a5\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4"
Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.157163 4775 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.157259 4775 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert podName:5a65a9ef-28c7-46ae-826d-5546af1103a5 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:43.157234323 +0000 UTC m=+1050.152063063 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert") pod "infra-operator-controller-manager-58749ffdfb-mcrj4" (UID: "5a65a9ef-28c7-46ae-826d-5546af1103a5") : secret "infra-operator-webhook-server-cert" not found Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.237727 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-jrhlh"] Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.240228 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6d9458688d-v8dw9"] Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.265669 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:f2035a0d3a8cc9434ab118078297f08cb8f3df98d1c75005279ee7915a3c2551,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-96qrp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-6d9458688d-v8dw9_openstack-operators(272dcd84-1bb6-42cb-8c8e-6851f9f031de): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.267108 4775 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-v8dw9" podUID="272dcd84-1bb6-42cb-8c8e-6851f9f031de" Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.269108 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-xst4r"] Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.274404 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-xrmvt" event={"ID":"841fb528-61a8-445e-a135-be26295bc975","Type":"ContainerStarted","Data":"db229de7b8a6238a98edaf9bf26b33f4e350ab7fb3bb77c280ee1da12f4c6a0c"} Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.276899 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-n4k5s"] Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.276923 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-ppxmc" event={"ID":"352223d5-fa0a-43df-8bad-0eaa9b6b439d","Type":"ContainerStarted","Data":"e7a2b0e0a45bac63400217ceb07fdc94a77e510171269f4ff2a05df993bbb5ac"} Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.280158 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-nqw74" event={"ID":"ecef6080-ea2c-43f4-8ffa-da2ceb59369d","Type":"ContainerStarted","Data":"129ee5e5da3d8ef7d68f73ab6068a7925151a9cb63fa839879397f449acc7e9b"} Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.281123 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-vl7m5"] Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.281537 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-pk9jd" event={"ID":"56ee00d0-c0f0-442a-bf4a-7335b62c1c4e","Type":"ContainerStarted","Data":"afc85f9ff0a5991ece4bbf47c4ef497926f8ea2d8e48f8f279a94d996b32ac39"} Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.284037 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bgbpj" event={"ID":"0784c928-e0c5-4afb-99cb-4f1f96820a14","Type":"ContainerStarted","Data":"958fec355ec2e927879ead1cf096153deae15a96fe02055adf6702c4956f8c4c"} Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.284820 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-xtmz8"] Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.286248 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-srgq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5d646b7d76-n4k5s_openstack-operators(072b9a9d-8a08-454c-b1b6-628fcdcc91df): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.286877 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-pfdc5" event={"ID":"853c6152-25bf-4374-a941-f9cd4202c87f","Type":"ContainerStarted","Data":"fef82658e900a2f6a85823adcd2016a932b07053cbc2df5d3a903112c2e396ad"} Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.287037 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rmtrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-xst4r_openstack-operators(3d7c7bc6-5124-4cd4-a406-448ca94ba640): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.287452 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-n4k5s" podUID="072b9a9d-8a08-454c-b1b6-628fcdcc91df" Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.287702 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jq89z" event={"ID":"64bae0eb-d703-4058-a545-b42d62045b90","Type":"ContainerStarted","Data":"a46114bb2373d705985767a513ff41fdc1f93b36ba78974b9d5075f550ed10e2"} Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.288138 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xst4r" podUID="3d7c7bc6-5124-4cd4-a406-448ca94ba640" Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.288675 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-dz7ft" event={"ID":"9ce79c2a-2c52-48de-80a6-887d592578d3","Type":"ContainerStarted","Data":"cdee4c6c9c1415ea8b74f300719a5a9b250b0b993d77e077cc6df21c62092136"} Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.289554 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sg9x5" event={"ID":"d9e69fcf-58c9-45fe-a291-4628c8219e10","Type":"ContainerStarted","Data":"0f672fce2a89dedd21cdfb294c8a56e8ee9bf30c43ab00ad718c95b3f67c6829"} Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.290263 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sxkzh" 
event={"ID":"9710b785-e422-4aca-88e8-e88d26d4e724","Type":"ContainerStarted","Data":"999d037cdd9246310854c4612f0c862016dfb68a7ac81ae3beafc0e260f540b2"} Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.291101 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg" event={"ID":"bb6ce8ae-8d3f-4988-9386-6a20487f8ae9","Type":"ContainerStarted","Data":"4bde7297fedac1ac3c4c59edc7c48cbc5ede515071e0bdafb5070d227e39937e"} Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.291792 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-d9495b985-k98mk" event={"ID":"9bad88d6-5ca9-4176-904d-72b793e1361e","Type":"ContainerStarted","Data":"3b31a7012ea48421023dcf9b284625ce3e8507aa2773ce103b29a5ca80ded146"} Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.293801 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-jrhlh" event={"ID":"91da96b4-921a-4b88-9804-55745989e08b","Type":"ContainerStarted","Data":"b768501f2b234575b918947ca54de948aadb0d4b42b85fe4e56f4c86accc286b"} Jan 23 14:21:41 crc kubenswrapper[4775]: W0123 14:21:41.294966 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f9597bf_12a1_4204_ac57_37c4c0189687.slice/crio-cdaa6e19cea03536eb78c59984213ddde02fb48f352b557bbdf3ac5f35173545 WatchSource:0}: Error finding container cdaa6e19cea03536eb78c59984213ddde02fb48f352b557bbdf3ac5f35173545: Status 404 returned error can't find the container with id cdaa6e19cea03536eb78c59984213ddde02fb48f352b557bbdf3ac5f35173545 Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.295157 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-f7lm6" event={"ID":"d98bebb2-a42a-45a6-b452-a82ce1f62896","Type":"ContainerStarted","Data":"b51d02d91198b6b29ea8129d6fa27afd58f540ccf1eb03aff6c21935fd28f0ce"} Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.296707 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-v8dw9" event={"ID":"272dcd84-1bb6-42cb-8c8e-6851f9f031de","Type":"ContainerStarted","Data":"4c6e90c804a052d4f5b2b0202990e850b6a6f56a6d4f3819524b4b0d210b287c"} Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.297834 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lj998,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-xtmz8_openstack-operators(9f9597bf-12a1-4204-ac57-37c4c0189687): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.299166 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f2035a0d3a8cc9434ab118078297f08cb8f3df98d1c75005279ee7915a3c2551\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-v8dw9" podUID="272dcd84-1bb6-42cb-8c8e-6851f9f031de" Jan 23 14:21:41 crc kubenswrapper[4775]: W0123 14:21:41.299176 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda07598ff_60cc_482e_a551_af751575709c.slice/crio-5d9da5130a65ba46334b94e3e5cbb27f78fe17253f94ce08bc3da0be8ebcea41 WatchSource:0}: Error finding container 5d9da5130a65ba46334b94e3e5cbb27f78fe17253f94ce08bc3da0be8ebcea41: Status 404 returned error can't find the container with id 5d9da5130a65ba46334b94e3e5cbb27f78fe17253f94ce08bc3da0be8ebcea41 Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.299229 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xtmz8" podUID="9f9597bf-12a1-4204-ac57-37c4c0189687" Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.302522 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-frhxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7bd9774b6-vl7m5_openstack-operators(a07598ff-60cc-482e-a551-af751575709c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.303847 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-vl7m5" podUID="a07598ff-60cc-482e-a551-af751575709c" Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.374117 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lhsf"] Jan 23 14:21:41 crc kubenswrapper[4775]: W0123 14:21:41.379090 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9da51f1_a035_44b8_9391_0d6018a84c61.slice/crio-e7bdedf6a779157f8c7e65eb8d9825ebae7b5753439e0a89e1b7badf688c1052 WatchSource:0}: Error finding container e7bdedf6a779157f8c7e65eb8d9825ebae7b5753439e0a89e1b7badf688c1052: Status 404 returned error can't find the container with id e7bdedf6a779157f8c7e65eb8d9825ebae7b5753439e0a89e1b7badf688c1052 Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.463826 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert\") pod 
\"openstack-baremetal-operator-controller-manager-6b68b8b854zk48c\" (UID: \"44a963d8-d403-42d5-acd2-a0379f07db51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.464018 4775 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.464070 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert podName:44a963d8-d403-42d5-acd2-a0379f07db51 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:43.464055667 +0000 UTC m=+1050.458884407 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" (UID: "44a963d8-d403-42d5-acd2-a0379f07db51") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.770347 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.770539 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.770719 4775 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.770776 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs podName:313b5382-60cf-4627-8ba7-a091fc457989 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:43.770760669 +0000 UTC m=+1050.765589409 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs") pod "openstack-operator-controller-manager-bb8f85db-bkqk9" (UID: "313b5382-60cf-4627-8ba7-a091fc457989") : secret "metrics-server-cert" not found Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.771091 4775 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.771123 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs podName:313b5382-60cf-4627-8ba7-a091fc457989 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:43.77111545 +0000 UTC m=+1050.765944190 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs") pod "openstack-operator-controller-manager-bb8f85db-bkqk9" (UID: "313b5382-60cf-4627-8ba7-a091fc457989") : secret "webhook-server-cert" not found Jan 23 14:21:42 crc kubenswrapper[4775]: I0123 14:21:42.307819 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xst4r" event={"ID":"3d7c7bc6-5124-4cd4-a406-448ca94ba640","Type":"ContainerStarted","Data":"bd9242543a7f5b2dc5838441799986fb353563af73ab096522e1bbd88214b2f2"} Jan 23 14:21:42 crc kubenswrapper[4775]: E0123 14:21:42.309240 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xst4r" podUID="3d7c7bc6-5124-4cd4-a406-448ca94ba640" Jan 23 14:21:42 crc kubenswrapper[4775]: I0123 14:21:42.320188 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-vl7m5" event={"ID":"a07598ff-60cc-482e-a551-af751575709c","Type":"ContainerStarted","Data":"5d9da5130a65ba46334b94e3e5cbb27f78fe17253f94ce08bc3da0be8ebcea41"} Jan 23 14:21:42 crc kubenswrapper[4775]: I0123 14:21:42.330276 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lhsf" event={"ID":"f9da51f1-a035-44b8-9391-0d6018a84c61","Type":"ContainerStarted","Data":"e7bdedf6a779157f8c7e65eb8d9825ebae7b5753439e0a89e1b7badf688c1052"} Jan 23 14:21:42 crc kubenswrapper[4775]: I0123 14:21:42.343787 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xtmz8" event={"ID":"9f9597bf-12a1-4204-ac57-37c4c0189687","Type":"ContainerStarted","Data":"cdaa6e19cea03536eb78c59984213ddde02fb48f352b557bbdf3ac5f35173545"} Jan 23 14:21:42 crc kubenswrapper[4775]: I0123 14:21:42.346682 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-n4k5s" event={"ID":"072b9a9d-8a08-454c-b1b6-628fcdcc91df","Type":"ContainerStarted","Data":"5ebd6b37eb25b61b551dc8eb0bd3c831f4061873c635f732fe2b1f83b21bbc42"} Jan 23 14:21:42 crc kubenswrapper[4775]: E0123 14:21:42.351008 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-vl7m5" podUID="a07598ff-60cc-482e-a551-af751575709c" Jan 23 14:21:42 crc kubenswrapper[4775]: E0123 14:21:42.351197 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f2035a0d3a8cc9434ab118078297f08cb8f3df98d1c75005279ee7915a3c2551\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-v8dw9" podUID="272dcd84-1bb6-42cb-8c8e-6851f9f031de" Jan 23 14:21:42 crc kubenswrapper[4775]: E0123 14:21:42.351271 4775 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xtmz8" podUID="9f9597bf-12a1-4204-ac57-37c4c0189687" Jan 23 14:21:42 crc kubenswrapper[4775]: E0123 14:21:42.351306 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-n4k5s" podUID="072b9a9d-8a08-454c-b1b6-628fcdcc91df" Jan 23 14:21:43 crc kubenswrapper[4775]: I0123 14:21:43.191544 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert\") pod \"infra-operator-controller-manager-58749ffdfb-mcrj4\" (UID: \"5a65a9ef-28c7-46ae-826d-5546af1103a5\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" Jan 23 14:21:43 crc kubenswrapper[4775]: E0123 14:21:43.191729 4775 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 14:21:43 crc kubenswrapper[4775]: E0123 14:21:43.191817 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert podName:5a65a9ef-28c7-46ae-826d-5546af1103a5 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:47.191787041 +0000 UTC m=+1054.186615771 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert") pod "infra-operator-controller-manager-58749ffdfb-mcrj4" (UID: "5a65a9ef-28c7-46ae-826d-5546af1103a5") : secret "infra-operator-webhook-server-cert" not found Jan 23 14:21:43 crc kubenswrapper[4775]: E0123 14:21:43.365845 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-vl7m5" podUID="a07598ff-60cc-482e-a551-af751575709c" Jan 23 14:21:43 crc kubenswrapper[4775]: E0123 14:21:43.365951 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xtmz8" podUID="9f9597bf-12a1-4204-ac57-37c4c0189687" Jan 23 14:21:43 crc kubenswrapper[4775]: E0123 14:21:43.366028 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-n4k5s" podUID="072b9a9d-8a08-454c-b1b6-628fcdcc91df" Jan 23 14:21:43 crc kubenswrapper[4775]: E0123 14:21:43.366108 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xst4r" podUID="3d7c7bc6-5124-4cd4-a406-448ca94ba640" Jan 23 14:21:43 crc kubenswrapper[4775]: I0123 14:21:43.495557 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854zk48c\" (UID: \"44a963d8-d403-42d5-acd2-a0379f07db51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" Jan 23 14:21:43 crc kubenswrapper[4775]: E0123 14:21:43.495703 4775 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 14:21:43 crc kubenswrapper[4775]: E0123 14:21:43.495750 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert podName:44a963d8-d403-42d5-acd2-a0379f07db51 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:47.495735733 +0000 UTC m=+1054.490564473 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" (UID: "44a963d8-d403-42d5-acd2-a0379f07db51") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 14:21:43 crc kubenswrapper[4775]: I0123 14:21:43.802478 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:43 crc kubenswrapper[4775]: I0123 14:21:43.802634 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:43 crc kubenswrapper[4775]: E0123 14:21:43.802662 4775 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 14:21:43 crc kubenswrapper[4775]: E0123 14:21:43.802745 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs podName:313b5382-60cf-4627-8ba7-a091fc457989 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:47.802727714 +0000 UTC m=+1054.797556454 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs") pod "openstack-operator-controller-manager-bb8f85db-bkqk9" (UID: "313b5382-60cf-4627-8ba7-a091fc457989") : secret "metrics-server-cert" not found Jan 23 14:21:43 crc kubenswrapper[4775]: E0123 14:21:43.803355 4775 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 14:21:43 crc kubenswrapper[4775]: E0123 14:21:43.803433 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs podName:313b5382-60cf-4627-8ba7-a091fc457989 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:47.803417224 +0000 UTC m=+1054.798245964 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs") pod "openstack-operator-controller-manager-bb8f85db-bkqk9" (UID: "313b5382-60cf-4627-8ba7-a091fc457989") : secret "webhook-server-cert" not found Jan 23 14:21:47 crc kubenswrapper[4775]: I0123 14:21:47.254720 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert\") pod \"infra-operator-controller-manager-58749ffdfb-mcrj4\" (UID: \"5a65a9ef-28c7-46ae-826d-5546af1103a5\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" Jan 23 14:21:47 crc kubenswrapper[4775]: E0123 14:21:47.254903 4775 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 14:21:47 crc kubenswrapper[4775]: E0123 14:21:47.255652 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert podName:5a65a9ef-28c7-46ae-826d-5546af1103a5 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:55.255629167 +0000 UTC m=+1062.250457917 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert") pod "infra-operator-controller-manager-58749ffdfb-mcrj4" (UID: "5a65a9ef-28c7-46ae-826d-5546af1103a5") : secret "infra-operator-webhook-server-cert" not found Jan 23 14:21:47 crc kubenswrapper[4775]: I0123 14:21:47.568720 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854zk48c\" (UID: \"44a963d8-d403-42d5-acd2-a0379f07db51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" Jan 23 14:21:47 crc kubenswrapper[4775]: E0123 14:21:47.569165 4775 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 14:21:47 crc kubenswrapper[4775]: E0123 14:21:47.569211 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert podName:44a963d8-d403-42d5-acd2-a0379f07db51 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:55.569198527 +0000 UTC m=+1062.564027267 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" (UID: "44a963d8-d403-42d5-acd2-a0379f07db51") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 14:21:47 crc kubenswrapper[4775]: I0123 14:21:47.876199 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:47 crc kubenswrapper[4775]: I0123 14:21:47.876301 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:47 crc kubenswrapper[4775]: E0123 14:21:47.876427 4775 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 14:21:47 crc kubenswrapper[4775]: E0123 14:21:47.876441 4775 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 14:21:47 crc kubenswrapper[4775]: E0123 14:21:47.876478 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs podName:313b5382-60cf-4627-8ba7-a091fc457989 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:55.876465075 +0000 UTC m=+1062.871293815 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs") pod "openstack-operator-controller-manager-bb8f85db-bkqk9" (UID: "313b5382-60cf-4627-8ba7-a091fc457989") : secret "metrics-server-cert" not found Jan 23 14:21:47 crc kubenswrapper[4775]: E0123 14:21:47.876564 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs podName:313b5382-60cf-4627-8ba7-a091fc457989 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:55.876543778 +0000 UTC m=+1062.871372518 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs") pod "openstack-operator-controller-manager-bb8f85db-bkqk9" (UID: "313b5382-60cf-4627-8ba7-a091fc457989") : secret "webhook-server-cert" not found Jan 23 14:21:54 crc kubenswrapper[4775]: E0123 14:21:54.492444 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127" Jan 23 14:21:54 crc kubenswrapper[4775]: E0123 14:21:54.492998 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-87fmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-jrhlh_openstack-operators(91da96b4-921a-4b88-9804-55745989e08b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 14:21:54 crc kubenswrapper[4775]: E0123 14:21:54.494853 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-jrhlh" podUID="91da96b4-921a-4b88-9804-55745989e08b" Jan 23 14:21:55 crc kubenswrapper[4775]: I0123 14:21:55.286742 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert\") pod \"infra-operator-controller-manager-58749ffdfb-mcrj4\" (UID: \"5a65a9ef-28c7-46ae-826d-5546af1103a5\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" Jan 23 14:21:55 crc kubenswrapper[4775]: I0123 14:21:55.295045 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert\") pod \"infra-operator-controller-manager-58749ffdfb-mcrj4\" (UID: \"5a65a9ef-28c7-46ae-826d-5546af1103a5\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" Jan 23 14:21:55 crc kubenswrapper[4775]: I0123 14:21:55.402081 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" Jan 23 14:21:55 crc kubenswrapper[4775]: E0123 14:21:55.454419 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-jrhlh" podUID="91da96b4-921a-4b88-9804-55745989e08b" Jan 23 14:21:55 crc kubenswrapper[4775]: I0123 14:21:55.590467 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854zk48c\" (UID: \"44a963d8-d403-42d5-acd2-a0379f07db51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" Jan 23 14:21:55 crc kubenswrapper[4775]: I0123 14:21:55.595573 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854zk48c\" (UID: \"44a963d8-d403-42d5-acd2-a0379f07db51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" Jan 23 14:21:55 crc kubenswrapper[4775]: E0123 14:21:55.622484 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/openstack-k8s-operators/nova-operator:232d61b7408febabff72594b5471873243247e20" Jan 23 14:21:55 crc kubenswrapper[4775]: E0123 14:21:55.622534 4775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/openstack-k8s-operators/nova-operator:232d61b7408febabff72594b5471873243247e20" Jan 23 14:21:55 crc kubenswrapper[4775]: E0123 14:21:55.624367 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.132:5001/openstack-k8s-operators/nova-operator:232d61b7408febabff72594b5471873243247e20,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gskfg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-d9495b985-k98mk_openstack-operators(9bad88d6-5ca9-4176-904d-72b793e1361e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 14:21:55 crc kubenswrapper[4775]: E0123 14:21:55.625669 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-d9495b985-k98mk" podUID="9bad88d6-5ca9-4176-904d-72b793e1361e" Jan 23 14:21:55 crc kubenswrapper[4775]: I0123 14:21:55.778640 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" Jan 23 14:21:55 crc kubenswrapper[4775]: I0123 14:21:55.896154 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:55 crc kubenswrapper[4775]: I0123 14:21:55.896284 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:55 crc kubenswrapper[4775]: E0123 14:21:55.896419 4775 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 14:21:55 crc kubenswrapper[4775]: E0123 14:21:55.896515 4775 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 14:21:55 crc kubenswrapper[4775]: E0123 14:21:55.896539 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs podName:313b5382-60cf-4627-8ba7-a091fc457989 nodeName:}" failed. No retries permitted until 2026-01-23 14:22:11.896511528 +0000 UTC m=+1078.891340298 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs") pod "openstack-operator-controller-manager-bb8f85db-bkqk9" (UID: "313b5382-60cf-4627-8ba7-a091fc457989") : secret "metrics-server-cert" not found Jan 23 14:21:55 crc kubenswrapper[4775]: E0123 14:21:55.896673 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs podName:313b5382-60cf-4627-8ba7-a091fc457989 nodeName:}" failed. No retries permitted until 2026-01-23 14:22:11.896629071 +0000 UTC m=+1078.891457831 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs") pod "openstack-operator-controller-manager-bb8f85db-bkqk9" (UID: "313b5382-60cf-4627-8ba7-a091fc457989") : secret "webhook-server-cert" not found Jan 23 14:21:56 crc kubenswrapper[4775]: E0123 14:21:56.216350 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 23 14:21:56 crc kubenswrapper[4775]: E0123 14:21:56.216552 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-th6g4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-bgbpj_openstack-operators(0784c928-e0c5-4afb-99cb-4f1f96820a14): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 14:21:56 crc kubenswrapper[4775]: E0123 14:21:56.217759 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bgbpj" 
podUID="0784c928-e0c5-4afb-99cb-4f1f96820a14" Jan 23 14:21:56 crc kubenswrapper[4775]: E0123 14:21:56.461199 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bgbpj" podUID="0784c928-e0c5-4afb-99cb-4f1f96820a14" Jan 23 14:21:56 crc kubenswrapper[4775]: E0123 14:21:56.462858 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.132:5001/openstack-k8s-operators/nova-operator:232d61b7408febabff72594b5471873243247e20\\\"\"" pod="openstack-operators/nova-operator-controller-manager-d9495b985-k98mk" podUID="9bad88d6-5ca9-4176-904d-72b793e1361e" Jan 23 14:21:56 crc kubenswrapper[4775]: E0123 14:21:56.918576 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 23 14:21:56 crc kubenswrapper[4775]: E0123 14:21:56.919410 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-66h6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-2lhsf_openstack-operators(f9da51f1-a035-44b8-9391-0d6018a84c61): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 14:21:56 crc 
kubenswrapper[4775]: E0123 14:21:56.920590 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lhsf" podUID="f9da51f1-a035-44b8-9391-0d6018a84c61" Jan 23 14:21:57 crc kubenswrapper[4775]: E0123 14:21:57.465516 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lhsf" podUID="f9da51f1-a035-44b8-9391-0d6018a84c61" Jan 23 14:21:58 crc kubenswrapper[4775]: I0123 14:21:58.023833 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c"] Jan 23 14:21:58 crc kubenswrapper[4775]: I0123 14:21:58.135467 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4"] Jan 23 14:21:58 crc kubenswrapper[4775]: I0123 14:21:58.489249 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-dz7ft" event={"ID":"9ce79c2a-2c52-48de-80a6-887d592578d3","Type":"ContainerStarted","Data":"db2f2688bae5a6164e4165906cb369018cb9bf1f1fec4f02a36b65d09715a616"} Jan 23 14:21:58 crc kubenswrapper[4775]: I0123 14:21:58.489422 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-dz7ft" Jan 23 14:21:58 crc kubenswrapper[4775]: I0123 14:21:58.506452 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-dz7ft" podStartSLOduration=2.270703579 podStartE2EDuration="19.506440269s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:40.521637747 +0000 UTC m=+1047.516466487" lastFinishedPulling="2026-01-23 14:21:57.757374437 +0000 UTC m=+1064.752203177" observedRunningTime="2026-01-23 14:21:58.501155056 +0000 UTC m=+1065.495983796" watchObservedRunningTime="2026-01-23 14:21:58.506440269 +0000 UTC m=+1065.501269009" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.502044 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-pk9jd" event={"ID":"56ee00d0-c0f0-442a-bf4a-7335b62c1c4e","Type":"ContainerStarted","Data":"9330b422d03f293059c950c7db15fdd3f7d2cc166a990c773a43896fc66171fa"} Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.502314 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-pk9jd" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.504343 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sxkzh" event={"ID":"9710b785-e422-4aca-88e8-e88d26d4e724","Type":"ContainerStarted","Data":"535ef816c904367f450fc1654c575b1abb5d35264b246bfd070d4c2f8d7d5844"} Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.504449 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sxkzh" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.506516 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg" event={"ID":"bb6ce8ae-8d3f-4988-9386-6a20487f8ae9","Type":"ContainerStarted","Data":"88519d0f3908ad731c3a7968295ae958359de101c7157acb43cc04342089d0a2"} Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.506610 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.509335 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-ppxmc" event={"ID":"352223d5-fa0a-43df-8bad-0eaa9b6b439d","Type":"ContainerStarted","Data":"fdf6f94fd5f0c6b8722cedfc75f36c5410139682daa13027b10a211eaa2745a9"} Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.509485 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-ppxmc" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.524561 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-f7lm6" event={"ID":"d98bebb2-a42a-45a6-b452-a82ce1f62896","Type":"ContainerStarted","Data":"14ea6b409e153fe002358dd5bfe9cdc2dc004f11abb83eda5d7d78dcae47afb4"} Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.524633 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-f7lm6" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.527937 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-pk9jd" podStartSLOduration=4.685761987 podStartE2EDuration="20.52792147s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:40.341438348 +0000 UTC m=+1047.336267088" lastFinishedPulling="2026-01-23 14:21:56.183597821 +0000 UTC m=+1063.178426571" observedRunningTime="2026-01-23 14:21:59.521060681 +0000 UTC m=+1066.515889421" watchObservedRunningTime="2026-01-23 14:21:59.52792147 +0000 UTC m=+1066.522750210" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.533734 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jq89z" event={"ID":"64bae0eb-d703-4058-a545-b42d62045b90","Type":"ContainerStarted","Data":"294ee6684d9c624a8b330cef15053321999f775702b0c0f9705aa37e9b0baf09"} Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.534013 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jq89z" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.547784 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-nqw74" event={"ID":"ecef6080-ea2c-43f4-8ffa-da2ceb59369d","Type":"ContainerStarted","Data":"7227dac76c0563e3c738a508fd7da403d5acc1558b68845afcb501cb6b3d3ef6"} Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.548600 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-nqw74" Jan 23 14:21:59 crc 
kubenswrapper[4775]: I0123 14:21:59.548877 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sxkzh" podStartSLOduration=3.89476698 podStartE2EDuration="20.548866596s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:41.104244529 +0000 UTC m=+1048.099073269" lastFinishedPulling="2026-01-23 14:21:57.758344145 +0000 UTC m=+1064.753172885" observedRunningTime="2026-01-23 14:21:59.546674163 +0000 UTC m=+1066.541502903" watchObservedRunningTime="2026-01-23 14:21:59.548866596 +0000 UTC m=+1066.543695336" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.558954 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sg9x5" event={"ID":"d9e69fcf-58c9-45fe-a291-4628c8219e10","Type":"ContainerStarted","Data":"9e71fa10f945cc09c631809e9d89bebe28368cdb354be49f777531c649fc5ae0"} Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.559205 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sg9x5" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.565187 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" event={"ID":"5a65a9ef-28c7-46ae-826d-5546af1103a5","Type":"ContainerStarted","Data":"8ba5118e73f150a190f2c88b50539b48e57c3027e70e8fcb074ab8b45a5f964c"} Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.583438 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-v8dw9" event={"ID":"272dcd84-1bb6-42cb-8c8e-6851f9f031de","Type":"ContainerStarted","Data":"3f320b2468e700d95def8fd2495ca104ef38317f762aa0858e061410de74c51c"} Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.584232 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-v8dw9" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.585368 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-xrmvt" event={"ID":"841fb528-61a8-445e-a135-be26295bc975","Type":"ContainerStarted","Data":"99b663305e7965ad563cb9d0cdc6187333cf27cd90c3f22189c586efc3c3b6ba"} Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.585537 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-xrmvt" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.586441 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-pfdc5" event={"ID":"853c6152-25bf-4374-a941-f9cd4202c87f","Type":"ContainerStarted","Data":"9df83a8634e90cbf428f3981b67ea7ef5b1edc562176e8a76bf691023f48a202"} Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.586867 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-pfdc5" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.594292 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" 
event={"ID":"44a963d8-d403-42d5-acd2-a0379f07db51","Type":"ContainerStarted","Data":"b322d6a71f52beec10b6d0e0dd450c9ec4edd88a9e4b64ae89a7ca4b30b46405"} Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.631554 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-ppxmc" podStartSLOduration=5.046816142 podStartE2EDuration="20.63153831s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:40.59877169 +0000 UTC m=+1047.593600430" lastFinishedPulling="2026-01-23 14:21:56.183493848 +0000 UTC m=+1063.178322598" observedRunningTime="2026-01-23 14:21:59.604154347 +0000 UTC m=+1066.598983097" watchObservedRunningTime="2026-01-23 14:21:59.63153831 +0000 UTC m=+1066.626367050" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.656681 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-nqw74" podStartSLOduration=5.555633577 podStartE2EDuration="20.656662388s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:41.083380764 +0000 UTC m=+1048.078209504" lastFinishedPulling="2026-01-23 14:21:56.184409575 +0000 UTC m=+1063.179238315" observedRunningTime="2026-01-23 14:21:59.656186454 +0000 UTC m=+1066.651015194" watchObservedRunningTime="2026-01-23 14:21:59.656662388 +0000 UTC m=+1066.651491128" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.658322 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg" podStartSLOduration=5.360930919 podStartE2EDuration="20.658315186s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:40.887024248 +0000 UTC m=+1047.881852988" lastFinishedPulling="2026-01-23 14:21:56.184408525 +0000 UTC m=+1063.179237255" observedRunningTime="2026-01-23 14:21:59.631926782 +0000 UTC m=+1066.626755522" watchObservedRunningTime="2026-01-23 14:21:59.658315186 +0000 UTC m=+1066.653143926" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.712374 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jq89z" podStartSLOduration=3.702679398 podStartE2EDuration="20.712354671s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:40.747442086 +0000 UTC m=+1047.742270816" lastFinishedPulling="2026-01-23 14:21:57.757117349 +0000 UTC m=+1064.751946089" observedRunningTime="2026-01-23 14:21:59.706923884 +0000 UTC m=+1066.701752624" watchObservedRunningTime="2026-01-23 14:21:59.712354671 +0000 UTC m=+1066.707183411" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.772224 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-v8dw9" podStartSLOduration=3.114418791 podStartE2EDuration="20.772205034s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:41.26555919 +0000 UTC m=+1048.260387930" lastFinishedPulling="2026-01-23 14:21:58.923345433 +0000 UTC m=+1065.918174173" observedRunningTime="2026-01-23 14:21:59.747100697 +0000 UTC m=+1066.741929437" watchObservedRunningTime="2026-01-23 14:21:59.772205034 +0000 UTC m=+1066.767033774" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.819090 4775 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-f7lm6" podStartSLOduration=3.949819813 podStartE2EDuration="20.819071071s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:40.886903164 +0000 UTC m=+1047.881731904" lastFinishedPulling="2026-01-23 14:21:57.756154422 +0000 UTC m=+1064.750983162" observedRunningTime="2026-01-23 14:21:59.776784847 +0000 UTC m=+1066.771613587" watchObservedRunningTime="2026-01-23 14:21:59.819071071 +0000 UTC m=+1066.813899811" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.833402 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-xrmvt" podStartSLOduration=5.403860712 podStartE2EDuration="20.826390573s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:40.761855313 +0000 UTC m=+1047.756684053" lastFinishedPulling="2026-01-23 14:21:56.184385164 +0000 UTC m=+1063.179213914" observedRunningTime="2026-01-23 14:21:59.809642998 +0000 UTC m=+1066.804471728" watchObservedRunningTime="2026-01-23 14:21:59.826390573 +0000 UTC m=+1066.821219313" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.834300 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-pfdc5" podStartSLOduration=4.170147865 podStartE2EDuration="20.834280022s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:41.093006683 +0000 UTC m=+1048.087835423" lastFinishedPulling="2026-01-23 14:21:57.75713884 +0000 UTC m=+1064.751967580" observedRunningTime="2026-01-23 14:21:59.834149608 +0000 UTC m=+1066.828978348" watchObservedRunningTime="2026-01-23 14:21:59.834280022 +0000 UTC m=+1066.829108762" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.848467 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sg9x5" podStartSLOduration=3.855596186 podStartE2EDuration="20.848447762s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:40.763393028 +0000 UTC m=+1047.758221768" lastFinishedPulling="2026-01-23 14:21:57.756244614 +0000 UTC m=+1064.751073344" observedRunningTime="2026-01-23 14:21:59.847225277 +0000 UTC m=+1066.842054017" watchObservedRunningTime="2026-01-23 14:21:59.848447762 +0000 UTC m=+1066.843276502" Jan 23 14:22:08 crc kubenswrapper[4775]: I0123 14:22:08.677779 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" event={"ID":"44a963d8-d403-42d5-acd2-a0379f07db51","Type":"ContainerStarted","Data":"b5607a5d959ea8494045145fc77e597274b6e96cb462f54a12fe4dd3a0037431"} Jan 23 14:22:08 crc kubenswrapper[4775]: I0123 14:22:08.679394 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xtmz8" event={"ID":"9f9597bf-12a1-4204-ac57-37c4c0189687","Type":"ContainerStarted","Data":"92884eb603386718641648b0b3ba55f6d7ee1b007c867c2164ee00b7546eac3c"} Jan 23 14:22:08 crc kubenswrapper[4775]: I0123 14:22:08.685980 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" 
event={"ID":"5a65a9ef-28c7-46ae-826d-5546af1103a5","Type":"ContainerStarted","Data":"fb90c1bd8d9c13fccd3766fa6a713944de3a3ddd42a5f2ac7a5c65417ff3b289"} Jan 23 14:22:08 crc kubenswrapper[4775]: I0123 14:22:08.686379 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" Jan 23 14:22:08 crc kubenswrapper[4775]: I0123 14:22:08.687322 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-n4k5s" event={"ID":"072b9a9d-8a08-454c-b1b6-628fcdcc91df","Type":"ContainerStarted","Data":"69ef93c55e932a771a522b113fb28f7cb0884c4fce8910cb2ba02a7d540105f6"} Jan 23 14:22:08 crc kubenswrapper[4775]: I0123 14:22:08.688695 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xst4r" event={"ID":"3d7c7bc6-5124-4cd4-a406-448ca94ba640","Type":"ContainerStarted","Data":"e3e8dae1e41645b52484e850549abfc87ead0f1b0fc18a6afe5f7d8a5b2b7e42"} Jan 23 14:22:08 crc kubenswrapper[4775]: I0123 14:22:08.689631 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-vl7m5" event={"ID":"a07598ff-60cc-482e-a551-af751575709c","Type":"ContainerStarted","Data":"2dfaa1e6313d9ac18f1ecfe6f88daaadbf7fb4098d5b9343a9adb524e6f3eb0b"} Jan 23 14:22:08 crc kubenswrapper[4775]: I0123 14:22:08.710202 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" podStartSLOduration=21.430681764 podStartE2EDuration="29.71017683s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:58.902586861 +0000 UTC m=+1065.897415601" lastFinishedPulling="2026-01-23 14:22:07.182081877 +0000 UTC m=+1074.176910667" observedRunningTime="2026-01-23 14:22:08.706502383 +0000 UTC m=+1075.701331123" watchObservedRunningTime="2026-01-23 14:22:08.71017683 +0000 UTC m=+1075.705005600" Jan 23 14:22:09 crc kubenswrapper[4775]: I0123 14:22:09.611573 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-pk9jd" Jan 23 14:22:09 crc kubenswrapper[4775]: I0123 14:22:09.629864 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-dz7ft" Jan 23 14:22:09 crc kubenswrapper[4775]: I0123 14:22:09.650732 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-ppxmc" Jan 23 14:22:09 crc kubenswrapper[4775]: I0123 14:22:09.733318 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-xrmvt" Jan 23 14:22:09 crc kubenswrapper[4775]: I0123 14:22:09.764887 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jq89z" Jan 23 14:22:09 crc kubenswrapper[4775]: I0123 14:22:09.801655 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sg9x5" Jan 23 14:22:09 crc kubenswrapper[4775]: I0123 14:22:09.835538 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-f7lm6" Jan 23 14:22:09 crc kubenswrapper[4775]: I0123 14:22:09.901581 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-pfdc5" Jan 23 14:22:10 crc kubenswrapper[4775]: I0123 14:22:10.021333 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg" Jan 23 14:22:10 crc kubenswrapper[4775]: I0123 14:22:10.041668 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sxkzh" Jan 23 14:22:10 crc kubenswrapper[4775]: I0123 14:22:10.288465 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-nqw74" Jan 23 14:22:10 crc kubenswrapper[4775]: I0123 14:22:10.436946 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-v8dw9" Jan 23 14:22:11 crc kubenswrapper[4775]: I0123 14:22:11.989676 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:22:11 crc kubenswrapper[4775]: I0123 14:22:11.989863 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:22:11 crc kubenswrapper[4775]: I0123 14:22:11.999112 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:22:11 crc kubenswrapper[4775]: I0123 14:22:11.999346 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:22:12 crc kubenswrapper[4775]: I0123 14:22:12.034954 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:22:12 crc kubenswrapper[4775]: I0123 14:22:12.562509 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9"] Jan 23 14:22:12 crc kubenswrapper[4775]: I0123 14:22:12.727119 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" event={"ID":"313b5382-60cf-4627-8ba7-a091fc457989","Type":"ContainerStarted","Data":"d3b66bc76162b055f5be811ae8e63916a7a92a5f79e785b80deebab7d94ea605"} Jan 23 14:22:15 crc kubenswrapper[4775]: I0123 14:22:15.415468 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" Jan 23 14:22:16 crc kubenswrapper[4775]: I0123 14:22:16.777607 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xst4r" Jan 23 14:22:16 crc kubenswrapper[4775]: I0123 14:22:16.778271 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-n4k5s" Jan 23 14:22:16 crc kubenswrapper[4775]: I0123 14:22:16.780221 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-n4k5s" Jan 23 14:22:16 crc kubenswrapper[4775]: I0123 14:22:16.781351 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xst4r" Jan 23 14:22:16 crc kubenswrapper[4775]: I0123 14:22:16.804590 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-n4k5s" podStartSLOduration=12.276171169 podStartE2EDuration="37.804567176s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:41.286131026 +0000 UTC m=+1048.280959766" lastFinishedPulling="2026-01-23 14:22:06.814527033 +0000 UTC m=+1073.809355773" observedRunningTime="2026-01-23 14:22:16.796158972 +0000 UTC m=+1083.790987752" watchObservedRunningTime="2026-01-23 14:22:16.804567176 +0000 UTC m=+1083.799395926" Jan 23 14:22:16 crc kubenswrapper[4775]: I0123 14:22:16.815012 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-vl7m5" podStartSLOduration=14.1443807 podStartE2EDuration="37.814985457s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:41.302424258 +0000 UTC m=+1048.297252988" lastFinishedPulling="2026-01-23 14:22:04.973028965 +0000 UTC m=+1071.967857745" observedRunningTime="2026-01-23 14:22:16.812221307 +0000 UTC m=+1083.807050097" watchObservedRunningTime="2026-01-23 14:22:16.814985457 +0000 UTC m=+1083.809814237" Jan 23 14:22:16 crc kubenswrapper[4775]: I0123 14:22:16.832619 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xst4r" podStartSLOduration=11.932190107 podStartE2EDuration="37.832601387s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:41.286933819 +0000 UTC m=+1048.281762559" lastFinishedPulling="2026-01-23 14:22:07.187345069 +0000 UTC m=+1074.182173839" 
observedRunningTime="2026-01-23 14:22:16.830190308 +0000 UTC m=+1083.825019088" watchObservedRunningTime="2026-01-23 14:22:16.832601387 +0000 UTC m=+1083.827430137" Jan 23 14:22:16 crc kubenswrapper[4775]: I0123 14:22:16.849531 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xtmz8" podStartSLOduration=11.966527291 podStartE2EDuration="37.849512027s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:41.297701881 +0000 UTC m=+1048.292530621" lastFinishedPulling="2026-01-23 14:22:07.180686587 +0000 UTC m=+1074.175515357" observedRunningTime="2026-01-23 14:22:16.847139438 +0000 UTC m=+1083.841968178" watchObservedRunningTime="2026-01-23 14:22:16.849512027 +0000 UTC m=+1083.844340777" Jan 23 14:22:17 crc kubenswrapper[4775]: I0123 14:22:17.785070 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" Jan 23 14:22:17 crc kubenswrapper[4775]: I0123 14:22:17.795183 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" Jan 23 14:22:17 crc kubenswrapper[4775]: I0123 14:22:17.827011 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" podStartSLOduration=30.525009426 podStartE2EDuration="38.826985323s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:58.923217779 +0000 UTC m=+1065.918046519" lastFinishedPulling="2026-01-23 14:22:07.225193666 +0000 UTC m=+1074.220022416" observedRunningTime="2026-01-23 14:22:17.821476514 +0000 UTC m=+1084.816305294" watchObservedRunningTime="2026-01-23 14:22:17.826985323 +0000 UTC m=+1084.821814103" Jan 23 14:22:20 crc kubenswrapper[4775]: I0123 14:22:20.104767 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xtmz8" Jan 23 14:22:20 crc kubenswrapper[4775]: I0123 14:22:20.109929 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xtmz8" Jan 23 14:22:20 crc kubenswrapper[4775]: I0123 14:22:20.158482 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-vl7m5" Jan 23 14:22:20 crc kubenswrapper[4775]: I0123 14:22:20.162044 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-vl7m5" Jan 23 14:22:22 crc kubenswrapper[4775]: I0123 14:22:22.824340 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" event={"ID":"313b5382-60cf-4627-8ba7-a091fc457989","Type":"ContainerStarted","Data":"45eeb3bc3fb6584943a2df57f12324ee8c36534129f85f7e57654aa0b142ab49"} Jan 23 14:22:22 crc kubenswrapper[4775]: I0123 14:22:22.824706 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:22:22 crc kubenswrapper[4775]: I0123 14:22:22.826035 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-d9495b985-k98mk" 
event={"ID":"9bad88d6-5ca9-4176-904d-72b793e1361e","Type":"ContainerStarted","Data":"e73b6eeb014674539aea8fd7195079debeadbaa135e4e4e1baacaed853f9a774"} Jan 23 14:22:22 crc kubenswrapper[4775]: I0123 14:22:22.826454 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-d9495b985-k98mk" Jan 23 14:22:22 crc kubenswrapper[4775]: I0123 14:22:22.827746 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lhsf" event={"ID":"f9da51f1-a035-44b8-9391-0d6018a84c61","Type":"ContainerStarted","Data":"806e8aecea719f6e700353416c729b360dfa041467a7209c9d2bb8907b9ae312"} Jan 23 14:22:22 crc kubenswrapper[4775]: I0123 14:22:22.830574 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bgbpj" event={"ID":"0784c928-e0c5-4afb-99cb-4f1f96820a14","Type":"ContainerStarted","Data":"e169751a6f6db310d8818ff117c620bf08fa6be49c30a4cab5e099963e416b30"} Jan 23 14:22:22 crc kubenswrapper[4775]: I0123 14:22:22.830973 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bgbpj" Jan 23 14:22:22 crc kubenswrapper[4775]: I0123 14:22:22.832822 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-jrhlh" event={"ID":"91da96b4-921a-4b88-9804-55745989e08b","Type":"ContainerStarted","Data":"906e3088174e265e5243cc6871b9f7408a37073496c495427f28cebdbcb04706"} Jan 23 14:22:22 crc kubenswrapper[4775]: I0123 14:22:22.833036 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-jrhlh" Jan 23 14:22:22 crc kubenswrapper[4775]: I0123 14:22:22.859862 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" podStartSLOduration=43.8598392 podStartE2EDuration="43.8598392s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:22:22.855602997 +0000 UTC m=+1089.850431737" watchObservedRunningTime="2026-01-23 14:22:22.8598392 +0000 UTC m=+1089.854667940" Jan 23 14:22:22 crc kubenswrapper[4775]: I0123 14:22:22.877142 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-jrhlh" podStartSLOduration=3.159022954 podStartE2EDuration="43.877126941s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:41.265354864 +0000 UTC m=+1048.260183614" lastFinishedPulling="2026-01-23 14:22:21.983458841 +0000 UTC m=+1088.978287601" observedRunningTime="2026-01-23 14:22:22.872010503 +0000 UTC m=+1089.866839243" watchObservedRunningTime="2026-01-23 14:22:22.877126941 +0000 UTC m=+1089.871955681" Jan 23 14:22:22 crc kubenswrapper[4775]: I0123 14:22:22.893363 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lhsf" podStartSLOduration=2.295776203 podStartE2EDuration="42.89334594s" podCreationTimestamp="2026-01-23 14:21:40 +0000 UTC" firstStartedPulling="2026-01-23 14:21:41.385726639 +0000 UTC m=+1048.380555379" lastFinishedPulling="2026-01-23 
Jan 23 14:22:23 crc kubenswrapper[4775]: I0123 14:22:23.218698 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:22:23 crc kubenswrapper[4775]: I0123 14:22:23.218784 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:22:29 crc kubenswrapper[4775]: I0123 14:22:29.888562 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bgbpj"
Jan 23 14:22:30 crc kubenswrapper[4775]: I0123 14:22:30.138603 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-d9495b985-k98mk"
Jan 23 14:22:30 crc kubenswrapper[4775]: I0123 14:22:30.206885 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-jrhlh"
Jan 23 14:22:32 crc kubenswrapper[4775]: I0123 14:22:32.042868 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9"
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.606940 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/rabbitmq-server-0"]
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.610921 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-server-0"
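The two machine-config-daemon entries above are the kubelet-side view of a failing HTTP liveness probe: patch_prober.go logs the raw HTTP error, prober.go records the structured probe result. The probe evidently GETs http://127.0.0.1:8798/health. A hypothetical reconstruction of a probe of that shape in Go API types (illustrative only; the real machine-config-daemon manifest and its thresholds may differ, and all numeric settings here are invented):

    // liveness_sketch.go — hypothetical probe that would produce the failure above.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        probe := corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{
                    Host: "127.0.0.1", // consistent with a host-networked daemon on this node
                    Path: "/health",
                    Port: intstr.FromInt(8798),
                },
            },
            InitialDelaySeconds: 10, // assumed
            PeriodSeconds:       10, // assumed
            FailureThreshold:    3,  // assumed: restart only after repeated misses
        }
        fmt.Printf("%+v\n", probe)
    }

A single "connection refused" like the one logged here does not by itself restart the container; the kubelet only acts once the probe's failure threshold is exceeded.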
Need to start a new one" pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.613056 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-plugins-conf" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.613956 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-default-user" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.614047 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-server-conf" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.614079 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"kube-root-ca.crt" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.614163 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openshift-service-ca.crt" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.614285 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-erlang-cookie" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.614358 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-server-dockercfg-88xgt" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.623619 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-server-0"] Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.722443 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.722486 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.722525 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6nvw\" (UniqueName: \"kubernetes.io/projected/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-kube-api-access-q6nvw\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.722567 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.722614 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.722666 4775 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.722695 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.722715 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.722769 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-06f05806-3448-44a1-9675-136131ab3921\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-06f05806-3448-44a1-9675-136131ab3921\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.824233 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.824363 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.824442 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.824491 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.824530 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.824681 4775 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-06f05806-3448-44a1-9675-136131ab3921\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-06f05806-3448-44a1-9675-136131ab3921\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.824724 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.824789 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.824913 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6nvw\" (UniqueName: \"kubernetes.io/projected/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-kube-api-access-q6nvw\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.826144 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.826341 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.826775 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.829616 4775 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.829679 4775 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-06f05806-3448-44a1-9675-136131ab3921\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-06f05806-3448-44a1-9675-136131ab3921\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/387dfcf2d9ab5c99f04714504985073c98a182156e32a8a18238fa00e934eb7b/globalmount\"" pod="nova-kuttl-default/rabbitmq-server-0"
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.830605 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0"
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.832863 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0"
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.833260 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0"
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.842227 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0"
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.844745 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6nvw\" (UniqueName: \"kubernetes.io/projected/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-kube-api-access-q6nvw\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0"
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.880839 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-06f05806-3448-44a1-9675-136131ab3921\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-06f05806-3448-44a1-9675-136131ab3921\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0"
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.898955 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/rabbitmq-broadcaster-server-0"]
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.900367 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.903373 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-broadcaster-default-user"
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.903426 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-broadcaster-server-conf"
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.903538 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-broadcaster-server-dockercfg-tvlpb"
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.903735 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-broadcaster-plugins-conf"
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.903768 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-broadcaster-erlang-cookie"
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.925070 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-broadcaster-server-0"]
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.951833 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.027339 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/401a94b6-0628-4cea-b62a-c3229a913d16-erlang-cookie-secret\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.027725 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/401a94b6-0628-4cea-b62a-c3229a913d16-rabbitmq-confd\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.027767 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4b2e1233-3507-4076-a25f-98bbbbd64408\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b2e1233-3507-4076-a25f-98bbbbd64408\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.027856 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/401a94b6-0628-4cea-b62a-c3229a913d16-pod-info\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.027890 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/401a94b6-0628-4cea-b62a-c3229a913d16-plugins-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.027911 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/401a94b6-0628-4cea-b62a-c3229a913d16-rabbitmq-plugins\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.027967 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/401a94b6-0628-4cea-b62a-c3229a913d16-rabbitmq-erlang-cookie\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.028052 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb2gs\" (UniqueName: \"kubernetes.io/projected/401a94b6-0628-4cea-b62a-c3229a913d16-kube-api-access-tb2gs\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.028075 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/401a94b6-0628-4cea-b62a-c3229a913d16-server-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.094587 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/openstack-galera-0"]
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.096185 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.100490 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"galera-openstack-dockercfg-cr4k4"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.100920 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"cert-galera-openstack-svc"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.101129 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-config-data"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.104279 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-scripts"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.109639 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-galera-0"]
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.114173 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"combined-ca-bundle"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.129629 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4b2e1233-3507-4076-a25f-98bbbbd64408\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b2e1233-3507-4076-a25f-98bbbbd64408\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.129684 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/372c512d-5894-49da-ae1e-cb3e54aadacc-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.129710 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e595fd86-adf5-4556-9fff-92a693b79368\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e595fd86-adf5-4556-9fff-92a693b79368\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.129739 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/401a94b6-0628-4cea-b62a-c3229a913d16-pod-info\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.129765 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/372c512d-5894-49da-ae1e-cb3e54aadacc-config-data-default\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.129786 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/401a94b6-0628-4cea-b62a-c3229a913d16-plugins-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.129823 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/401a94b6-0628-4cea-b62a-c3229a913d16-rabbitmq-plugins\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.129858 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh9sh\" (UniqueName: \"kubernetes.io/projected/372c512d-5894-49da-ae1e-cb3e54aadacc-kube-api-access-vh9sh\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.129878 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/372c512d-5894-49da-ae1e-cb3e54aadacc-operator-scripts\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.129905 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/372c512d-5894-49da-ae1e-cb3e54aadacc-config-data-generated\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.129929 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/372c512d-5894-49da-ae1e-cb3e54aadacc-kolla-config\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.129959 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/401a94b6-0628-4cea-b62a-c3229a913d16-rabbitmq-erlang-cookie\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.129995 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/401a94b6-0628-4cea-b62a-c3229a913d16-server-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.130015 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb2gs\" (UniqueName: \"kubernetes.io/projected/401a94b6-0628-4cea-b62a-c3229a913d16-kube-api-access-tb2gs\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.130042 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372c512d-5894-49da-ae1e-cb3e54aadacc-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.130082 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/401a94b6-0628-4cea-b62a-c3229a913d16-erlang-cookie-secret\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.130104 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/401a94b6-0628-4cea-b62a-c3229a913d16-rabbitmq-confd\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.131234 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/401a94b6-0628-4cea-b62a-c3229a913d16-rabbitmq-erlang-cookie\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.131687 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/401a94b6-0628-4cea-b62a-c3229a913d16-rabbitmq-plugins\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.131978 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/401a94b6-0628-4cea-b62a-c3229a913d16-plugins-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.132972 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/401a94b6-0628-4cea-b62a-c3229a913d16-server-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.134699 4775 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.134720 4775 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4b2e1233-3507-4076-a25f-98bbbbd64408\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b2e1233-3507-4076-a25f-98bbbbd64408\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b8aeee34c9457791b5054d3c85b310ed049d8dc36b23beea77ad5efef6c10870/globalmount\"" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.135198 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/401a94b6-0628-4cea-b62a-c3229a913d16-rabbitmq-confd\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.151669 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/401a94b6-0628-4cea-b62a-c3229a913d16-pod-info\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.155642 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/401a94b6-0628-4cea-b62a-c3229a913d16-erlang-cookie-secret\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.160915 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb2gs\" (UniqueName: \"kubernetes.io/projected/401a94b6-0628-4cea-b62a-c3229a913d16-kube-api-access-tb2gs\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.174915 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4b2e1233-3507-4076-a25f-98bbbbd64408\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b2e1233-3507-4076-a25f-98bbbbd64408\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.221867 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.229602 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/rabbitmq-cell1-server-0"] Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.230590 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/372c512d-5894-49da-ae1e-cb3e54aadacc-config-data-default\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.230636 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/372c512d-5894-49da-ae1e-cb3e54aadacc-operator-scripts\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.230657 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh9sh\" (UniqueName: \"kubernetes.io/projected/372c512d-5894-49da-ae1e-cb3e54aadacc-kube-api-access-vh9sh\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.230682 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/372c512d-5894-49da-ae1e-cb3e54aadacc-config-data-generated\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.230697 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/372c512d-5894-49da-ae1e-cb3e54aadacc-kolla-config\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.230733 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372c512d-5894-49da-ae1e-cb3e54aadacc-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.230777 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/372c512d-5894-49da-ae1e-cb3e54aadacc-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.230814 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e595fd86-adf5-4556-9fff-92a693b79368\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e595fd86-adf5-4556-9fff-92a693b79368\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.231128 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.231913 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/372c512d-5894-49da-ae1e-cb3e54aadacc-config-data-default\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.232962 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/372c512d-5894-49da-ae1e-cb3e54aadacc-operator-scripts\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.238881 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372c512d-5894-49da-ae1e-cb3e54aadacc-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.242017 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/372c512d-5894-49da-ae1e-cb3e54aadacc-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.242290 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/372c512d-5894-49da-ae1e-cb3e54aadacc-kolla-config\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.242571 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-cell1-server-dockercfg-zlrt7" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.242874 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-cell1-server-conf" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.243022 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-cell1-erlang-cookie" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.243147 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-cell1-plugins-conf" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.243256 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-cell1-default-user" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.244183 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/372c512d-5894-49da-ae1e-cb3e54aadacc-config-data-generated\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.256974 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-cell1-server-0"] Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.263915 4775 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice 
STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.263970 4775 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e595fd86-adf5-4556-9fff-92a693b79368\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e595fd86-adf5-4556-9fff-92a693b79368\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8bfe8abef1b31634e899a0e76673a6d30481acfe91057b467659b7666645fb84/globalmount\"" pod="nova-kuttl-default/openstack-galera-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.264154 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-server-0"] Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.270708 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh9sh\" (UniqueName: \"kubernetes.io/projected/372c512d-5894-49da-ae1e-cb3e54aadacc-kube-api-access-vh9sh\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.275428 4775 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.301742 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e595fd86-adf5-4556-9fff-92a693b79368\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e595fd86-adf5-4556-9fff-92a693b79368\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.401764 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/memcached-0"] Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.402580 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/memcached-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.409683 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"memcached-memcached-dockercfg-n8szm" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.409940 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"memcached-config-data" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.418457 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/memcached-0"] Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.422626 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstack-galera-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.449651 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4b05c189-a694-4cbc-b679-a974e6bf99bc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.449713 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4b05c189-a694-4cbc-b679-a974e6bf99bc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.449740 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5fv2\" (UniqueName: \"kubernetes.io/projected/4b05c189-a694-4cbc-b679-a974e6bf99bc-kube-api-access-r5fv2\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.449759 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4b05c189-a694-4cbc-b679-a974e6bf99bc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.449778 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-52481726-f20d-47e3-96bb-73eb990ded39\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-52481726-f20d-47e3-96bb-73eb990ded39\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.449813 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4b05c189-a694-4cbc-b679-a974e6bf99bc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.449828 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4b05c189-a694-4cbc-b679-a974e6bf99bc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.449846 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4b05c189-a694-4cbc-b679-a974e6bf99bc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.449859 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e1f7aa1-1780-4ccb-b1a5-66b9b279d555-config-data\") pod \"memcached-0\" (UID: \"2e1f7aa1-1780-4ccb-b1a5-66b9b279d555\") " pod="nova-kuttl-default/memcached-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.449881 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97w92\" (UniqueName: \"kubernetes.io/projected/2e1f7aa1-1780-4ccb-b1a5-66b9b279d555-kube-api-access-97w92\") pod \"memcached-0\" (UID: \"2e1f7aa1-1780-4ccb-b1a5-66b9b279d555\") " pod="nova-kuttl-default/memcached-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.449894 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2e1f7aa1-1780-4ccb-b1a5-66b9b279d555-kolla-config\") pod \"memcached-0\" (UID: \"2e1f7aa1-1780-4ccb-b1a5-66b9b279d555\") " pod="nova-kuttl-default/memcached-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.449930 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4b05c189-a694-4cbc-b679-a974e6bf99bc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.550548 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4b05c189-a694-4cbc-b679-a974e6bf99bc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.550592 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4b05c189-a694-4cbc-b679-a974e6bf99bc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.550628 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4b05c189-a694-4cbc-b679-a974e6bf99bc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.550652 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5fv2\" (UniqueName: \"kubernetes.io/projected/4b05c189-a694-4cbc-b679-a974e6bf99bc-kube-api-access-r5fv2\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.550672 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4b05c189-a694-4cbc-b679-a974e6bf99bc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.550692 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-52481726-f20d-47e3-96bb-73eb990ded39\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-52481726-f20d-47e3-96bb-73eb990ded39\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.550712 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4b05c189-a694-4cbc-b679-a974e6bf99bc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.550728 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4b05c189-a694-4cbc-b679-a974e6bf99bc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.550744 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4b05c189-a694-4cbc-b679-a974e6bf99bc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.550760 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e1f7aa1-1780-4ccb-b1a5-66b9b279d555-config-data\") pod \"memcached-0\" (UID: \"2e1f7aa1-1780-4ccb-b1a5-66b9b279d555\") " pod="nova-kuttl-default/memcached-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.550785 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97w92\" (UniqueName: \"kubernetes.io/projected/2e1f7aa1-1780-4ccb-b1a5-66b9b279d555-kube-api-access-97w92\") pod \"memcached-0\" (UID: \"2e1f7aa1-1780-4ccb-b1a5-66b9b279d555\") " pod="nova-kuttl-default/memcached-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.550813 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2e1f7aa1-1780-4ccb-b1a5-66b9b279d555-kolla-config\") pod \"memcached-0\" (UID: \"2e1f7aa1-1780-4ccb-b1a5-66b9b279d555\") " pod="nova-kuttl-default/memcached-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.551720 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2e1f7aa1-1780-4ccb-b1a5-66b9b279d555-kolla-config\") pod \"memcached-0\" (UID: \"2e1f7aa1-1780-4ccb-b1a5-66b9b279d555\") " pod="nova-kuttl-default/memcached-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.551942 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4b05c189-a694-4cbc-b679-a974e6bf99bc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.552213 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4b05c189-a694-4cbc-b679-a974e6bf99bc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.552243 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4b05c189-a694-4cbc-b679-a974e6bf99bc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.556587 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e1f7aa1-1780-4ccb-b1a5-66b9b279d555-config-data\") pod \"memcached-0\" (UID: \"2e1f7aa1-1780-4ccb-b1a5-66b9b279d555\") " pod="nova-kuttl-default/memcached-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.556961 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4b05c189-a694-4cbc-b679-a974e6bf99bc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.558064 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4b05c189-a694-4cbc-b679-a974e6bf99bc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.558334 4775 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.558373 4775 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-52481726-f20d-47e3-96bb-73eb990ded39\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-52481726-f20d-47e3-96bb-73eb990ded39\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/625e5d6e98a1815fffeff764585e01bbe4bc815f98a1e4d18fbfd842578c912b/globalmount\"" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.562021 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4b05c189-a694-4cbc-b679-a974e6bf99bc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.564523 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4b05c189-a694-4cbc-b679-a974e6bf99bc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.580619 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97w92\" (UniqueName: \"kubernetes.io/projected/2e1f7aa1-1780-4ccb-b1a5-66b9b279d555-kube-api-access-97w92\") pod \"memcached-0\" (UID: \"2e1f7aa1-1780-4ccb-b1a5-66b9b279d555\") " pod="nova-kuttl-default/memcached-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.584533 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5fv2\" (UniqueName: \"kubernetes.io/projected/4b05c189-a694-4cbc-b679-a974e6bf99bc-kube-api-access-r5fv2\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.610277 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-52481726-f20d-47e3-96bb-73eb990ded39\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-52481726-f20d-47e3-96bb-73eb990ded39\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.718286 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/memcached-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.760389 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-broadcaster-server-0"]
Jan 23 14:22:43 crc kubenswrapper[4775]: W0123 14:22:43.772569 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod401a94b6_0628_4cea_b62a_c3229a913d16.slice/crio-c263b35ff639d1e267d788184045806470ad44d3f6ff110e43eef65c8168a798 WatchSource:0}: Error finding container c263b35ff639d1e267d788184045806470ad44d3f6ff110e43eef65c8168a798: Status 404 returned error can't find the container with id c263b35ff639d1e267d788184045806470ad44d3f6ff110e43eef65c8168a798
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.872922 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.904821 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-galera-0"]
Jan 23 14:22:43 crc kubenswrapper[4775]: W0123 14:22:43.912455 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod372c512d_5894_49da_ae1e_cb3e54aadacc.slice/crio-7f0592b7f7729bcd53da8c27c4b2b6922957d515cfa1449f4ab4e1e26d7d4ffc WatchSource:0}: Error finding container 7f0592b7f7729bcd53da8c27c4b2b6922957d515cfa1449f4ab4e1e26d7d4ffc: Status 404 returned error can't find the container with id 7f0592b7f7729bcd53da8c27c4b2b6922957d515cfa1449f4ab4e1e26d7d4ffc
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.031193 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"372c512d-5894-49da-ae1e-cb3e54aadacc","Type":"ContainerStarted","Data":"7f0592b7f7729bcd53da8c27c4b2b6922957d515cfa1449f4ab4e1e26d7d4ffc"}
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.032239 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"70288c27-7f95-4843-a8fb-f2ac58ea8e1f","Type":"ContainerStarted","Data":"681b7a688d4f04046470e3a96437a041c22fa95e2914d20b3a8b6ffbf246ee9d"}
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.033109 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"401a94b6-0628-4cea-b62a-c3229a913d16","Type":"ContainerStarted","Data":"c263b35ff639d1e267d788184045806470ad44d3f6ff110e43eef65c8168a798"}
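[editor's note] The "SyncLoop (PLEG)" records above come from the Pod Lifecycle Event Generator, which relists containers and reports state changes (ContainerStarted/ContainerDied) as events; the transient 404 watch-event warnings are cadvisor racing a container that PLEG has not observed yet. A minimal, hypothetical sketch of that diff-and-emit idea (not kubelet's PLEG implementation):

// pleg_diff.go — toy snapshot diff producing ContainerStarted/ContainerDied
// events in the style of the SyncLoop lines above; illustration only.
package main

import "fmt"

func diff(prev, curr map[string]bool) {
	for id := range curr {
		if !prev[id] { // present now, absent before
			fmt.Printf("event: ContainerStarted %s\n", id)
		}
	}
	for id := range prev {
		if !curr[id] { // present before, absent now
			fmt.Printf("event: ContainerDied %s\n", id)
		}
	}
}

func main() {
	prev := map[string]bool{}
	curr := map[string]bool{"7f0592b7f772": true} // sandbox ID from the log, truncated
	diff(prev, curr)
}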
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.144918 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/memcached-0"]
Jan 23 14:22:44 crc kubenswrapper[4775]: W0123 14:22:44.149010 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e1f7aa1_1780_4ccb_b1a5_66b9b279d555.slice/crio-f97e0016988c3904475e08cd0b2e338c4455dbbc7a7a7709e4a8925657685db4 WatchSource:0}: Error finding container f97e0016988c3904475e08cd0b2e338c4455dbbc7a7a7709e4a8925657685db4: Status 404 returned error can't find the container with id f97e0016988c3904475e08cd0b2e338c4455dbbc7a7a7709e4a8925657685db4
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.307190 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-cell1-server-0"]
Jan 23 14:22:44 crc kubenswrapper[4775]: W0123 14:22:44.317889 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b05c189_a694_4cbc_b679_a974e6bf99bc.slice/crio-936053d0b19282a7ef8fac8691fc7f6ba9e7e72ffd151fdf6125986f578e9da5 WatchSource:0}: Error finding container 936053d0b19282a7ef8fac8691fc7f6ba9e7e72ffd151fdf6125986f578e9da5: Status 404 returned error can't find the container with id 936053d0b19282a7ef8fac8691fc7f6ba9e7e72ffd151fdf6125986f578e9da5
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.519985 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/openstack-cell1-galera-0"]
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.528052 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/openstack-cell1-galera-0"
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.533665 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"cert-galera-openstack-cell1-svc"
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.533986 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-cell1-scripts"
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.534097 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"galera-openstack-cell1-dockercfg-clnx5"
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.538764 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-cell1-config-data"
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.552721 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-cell1-galera-0"]
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.666781 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b09e829b-6f38-42c2-b363-ef7971d763f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b09e829b-6f38-42c2-b363-ef7971d763f6\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0"
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.666927 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/481cbe1b-2796-4ad2-a342-3661afa62383-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0"
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.666994 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/481cbe1b-2796-4ad2-a342-3661afa62383-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0"
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.667049 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/481cbe1b-2796-4ad2-a342-3661afa62383-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0"
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.667126 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/481cbe1b-2796-4ad2-a342-3661afa62383-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0"
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.667175 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pprk\" (UniqueName: \"kubernetes.io/projected/481cbe1b-2796-4ad2-a342-3661afa62383-kube-api-access-9pprk\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0"
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.667211 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/481cbe1b-2796-4ad2-a342-3661afa62383-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0"
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.667258 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/481cbe1b-2796-4ad2-a342-3661afa62383-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0"
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.768818 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/481cbe1b-2796-4ad2-a342-3661afa62383-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0"
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.768886 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/481cbe1b-2796-4ad2-a342-3661afa62383-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0"
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.768915 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/481cbe1b-2796-4ad2-a342-3661afa62383-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0"
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.768961 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/481cbe1b-2796-4ad2-a342-3661afa62383-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0"
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.768991 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pprk\" (UniqueName: \"kubernetes.io/projected/481cbe1b-2796-4ad2-a342-3661afa62383-kube-api-access-9pprk\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0"
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.769016 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/481cbe1b-2796-4ad2-a342-3661afa62383-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0"
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.769047 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/481cbe1b-2796-4ad2-a342-3661afa62383-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0"
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.769102 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b09e829b-6f38-42c2-b363-ef7971d763f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b09e829b-6f38-42c2-b363-ef7971d763f6\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0"
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.770470 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/481cbe1b-2796-4ad2-a342-3661afa62383-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0"
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.770550 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/481cbe1b-2796-4ad2-a342-3661afa62383-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0"
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.770985 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/481cbe1b-2796-4ad2-a342-3661afa62383-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0"
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.772965 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/481cbe1b-2796-4ad2-a342-3661afa62383-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0"
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.774419 4775 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.774473 4775 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b09e829b-6f38-42c2-b363-ef7971d763f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b09e829b-6f38-42c2-b363-ef7971d763f6\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/39a3b2bcc0c98acb393397741892ba686c2627f8ce2bbec98169f9c6b68efb3a/globalmount\"" pod="nova-kuttl-default/openstack-cell1-galera-0"
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.775471 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/481cbe1b-2796-4ad2-a342-3661afa62383-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0"
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.777503 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/481cbe1b-2796-4ad2-a342-3661afa62383-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0"
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.791025 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pprk\" (UniqueName: \"kubernetes.io/projected/481cbe1b-2796-4ad2-a342-3661afa62383-kube-api-access-9pprk\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0"
Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.809216 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b09e829b-6f38-42c2-b363-ef7971d763f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b09e829b-6f38-42c2-b363-ef7971d763f6\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0"
Need to start a new one" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 23 14:22:45 crc kubenswrapper[4775]: I0123 14:22:45.107131 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/memcached-0" event={"ID":"2e1f7aa1-1780-4ccb-b1a5-66b9b279d555","Type":"ContainerStarted","Data":"f97e0016988c3904475e08cd0b2e338c4455dbbc7a7a7709e4a8925657685db4"} Jan 23 14:22:45 crc kubenswrapper[4775]: I0123 14:22:45.123430 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"4b05c189-a694-4cbc-b679-a974e6bf99bc","Type":"ContainerStarted","Data":"936053d0b19282a7ef8fac8691fc7f6ba9e7e72ffd151fdf6125986f578e9da5"} Jan 23 14:22:45 crc kubenswrapper[4775]: I0123 14:22:45.441016 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-cell1-galera-0"] Jan 23 14:22:46 crc kubenswrapper[4775]: I0123 14:22:46.136785 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"481cbe1b-2796-4ad2-a342-3661afa62383","Type":"ContainerStarted","Data":"806899daf9f516e2ac0cf4380290e79c972a42cbdb37df750944114a549d2e34"} Jan 23 14:22:53 crc kubenswrapper[4775]: I0123 14:22:53.220317 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:22:53 crc kubenswrapper[4775]: I0123 14:22:53.220948 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:23:00 crc kubenswrapper[4775]: E0123 14:23:00.361862 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 23 14:23:00 crc kubenswrapper[4775]: E0123 14:23:00.362600 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vh9sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_nova-kuttl-default(372c512d-5894-49da-ae1e-cb3e54aadacc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 14:23:00 crc kubenswrapper[4775]: E0123 14:23:00.363886 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="nova-kuttl-default/openstack-galera-0" podUID="372c512d-5894-49da-ae1e-cb3e54aadacc" Jan 23 14:23:00 crc kubenswrapper[4775]: E0123 14:23:00.856042 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 23 14:23:00 crc kubenswrapper[4775]: E0123 14:23:00.856947 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9pprk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_nova-kuttl-default(481cbe1b-2796-4ad2-a342-3661afa62383): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 14:23:00 crc kubenswrapper[4775]: E0123 14:23:00.858610 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="nova-kuttl-default/openstack-cell1-galera-0" podUID="481cbe1b-2796-4ad2-a342-3661afa62383" Jan 23 14:23:00 crc kubenswrapper[4775]: E0123 14:23:00.908957 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 23 14:23:00 crc kubenswrapper[4775]: E0123 14:23:00.909474 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 
30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q6nvw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000710000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_nova-kuttl-default(70288c27-7f95-4843-a8fb-f2ac58ea8e1f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 14:23:00 crc kubenswrapper[4775]: E0123 14:23:00.910822 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="nova-kuttl-default/rabbitmq-server-0" podUID="70288c27-7f95-4843-a8fb-f2ac58ea8e1f" Jan 23 14:23:01 crc kubenswrapper[4775]: I0123 14:23:01.263206 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/memcached-0" event={"ID":"2e1f7aa1-1780-4ccb-b1a5-66b9b279d555","Type":"ContainerStarted","Data":"de03829a0d3a13c8fdc5349b7411f0d4b6c906ba5769d37486b0147c5d6ff421"} Jan 23 14:23:01 crc kubenswrapper[4775]: E0123 14:23:01.265851 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="nova-kuttl-default/openstack-cell1-galera-0" podUID="481cbe1b-2796-4ad2-a342-3661afa62383" Jan 23 14:23:01 crc kubenswrapper[4775]: E0123 14:23:01.265863 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image 
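[editor's note] The sequence above is the standard image-pull failure path: PullImage fails at the CRI (here a canceled pull), the sync reports ErrImagePull, and subsequent syncs report ImagePullBackOff until the pull succeeds (as it eventually does at 14:23:12). A client-go sketch for spotting pods stuck in this state, assuming in-cluster credentials and the namespace from the log:

// find_pull_backoff.go — sketch: report containers waiting on image pulls.
// Assumes in-cluster config; adjust for kubeconfig-based access as needed.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pods, err := cs.CoreV1().Pods("nova-kuttl-default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// Init containers fail first here (mysql-bootstrap, setup-container),
		// so check both status lists.
		statuses := append(p.Status.InitContainerStatuses, p.Status.ContainerStatuses...)
		for _, s := range statuses {
			if w := s.State.Waiting; w != nil && (w.Reason == "ErrImagePull" || w.Reason == "ImagePullBackOff") {
				fmt.Printf("%s/%s: %s (%s)\n", p.Name, s.Name, w.Reason, w.Message)
			}
		}
	}
}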
\\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="nova-kuttl-default/openstack-galera-0" podUID="372c512d-5894-49da-ae1e-cb3e54aadacc" Jan 23 14:23:01 crc kubenswrapper[4775]: I0123 14:23:01.346906 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/memcached-0" podStartSLOduration=1.665656536 podStartE2EDuration="18.346795916s" podCreationTimestamp="2026-01-23 14:22:43 +0000 UTC" firstStartedPulling="2026-01-23 14:22:44.150509037 +0000 UTC m=+1111.145337797" lastFinishedPulling="2026-01-23 14:23:00.831648447 +0000 UTC m=+1127.826477177" observedRunningTime="2026-01-23 14:23:01.339404412 +0000 UTC m=+1128.334233202" watchObservedRunningTime="2026-01-23 14:23:01.346795916 +0000 UTC m=+1128.341624676" Jan 23 14:23:02 crc kubenswrapper[4775]: I0123 14:23:02.273270 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/memcached-0" Jan 23 14:23:03 crc kubenswrapper[4775]: I0123 14:23:03.283117 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"70288c27-7f95-4843-a8fb-f2ac58ea8e1f","Type":"ContainerStarted","Data":"2eefbe509e8194211bd62dec0bf3bff4e146f4a7f14ae1ac4ad0df7edbe56abb"} Jan 23 14:23:03 crc kubenswrapper[4775]: I0123 14:23:03.286235 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"401a94b6-0628-4cea-b62a-c3229a913d16","Type":"ContainerStarted","Data":"951381f236bf6d19ffe6bb0736f765d793795cae85a27156e7dda6bfa98ec1bb"} Jan 23 14:23:03 crc kubenswrapper[4775]: I0123 14:23:03.293367 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"4b05c189-a694-4cbc-b679-a974e6bf99bc","Type":"ContainerStarted","Data":"67185d02961f666b77208d2d95b7f2da17886cf0f7543d1ab63c0d9e0e7ad316"} Jan 23 14:23:08 crc kubenswrapper[4775]: I0123 14:23:08.719893 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/memcached-0" Jan 23 14:23:13 crc kubenswrapper[4775]: I0123 14:23:13.382516 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"372c512d-5894-49da-ae1e-cb3e54aadacc","Type":"ContainerStarted","Data":"defb01cab8366ab12bcf75b7962f1f8034a00742f825a1d91bef8750e90b2297"} Jan 23 14:23:13 crc kubenswrapper[4775]: I0123 14:23:13.384653 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"481cbe1b-2796-4ad2-a342-3661afa62383","Type":"ContainerStarted","Data":"31b5cf8ee4ede56b06f3da3e76cbc8fcf83ffea6473bc7a9161cc9c2317450ba"} Jan 23 14:23:17 crc kubenswrapper[4775]: I0123 14:23:17.420289 4775 generic.go:334] "Generic (PLEG): container finished" podID="481cbe1b-2796-4ad2-a342-3661afa62383" containerID="31b5cf8ee4ede56b06f3da3e76cbc8fcf83ffea6473bc7a9161cc9c2317450ba" exitCode=0 Jan 23 14:23:17 crc kubenswrapper[4775]: I0123 14:23:17.420382 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"481cbe1b-2796-4ad2-a342-3661afa62383","Type":"ContainerDied","Data":"31b5cf8ee4ede56b06f3da3e76cbc8fcf83ffea6473bc7a9161cc9c2317450ba"} Jan 23 14:23:17 crc kubenswrapper[4775]: I0123 14:23:17.424463 4775 generic.go:334] "Generic (PLEG): container finished" podID="372c512d-5894-49da-ae1e-cb3e54aadacc" containerID="defb01cab8366ab12bcf75b7962f1f8034a00742f825a1d91bef8750e90b2297" exitCode=0 
Jan 23 14:23:17 crc kubenswrapper[4775]: I0123 14:23:17.424508 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"372c512d-5894-49da-ae1e-cb3e54aadacc","Type":"ContainerDied","Data":"defb01cab8366ab12bcf75b7962f1f8034a00742f825a1d91bef8750e90b2297"}
Jan 23 14:23:18 crc kubenswrapper[4775]: I0123 14:23:18.438974 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"372c512d-5894-49da-ae1e-cb3e54aadacc","Type":"ContainerStarted","Data":"e58c53ea0950bc5520e5855a7d3316040fecf0f185fec6cf47fa56ff5be619e0"}
Jan 23 14:23:18 crc kubenswrapper[4775]: I0123 14:23:18.443435 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"481cbe1b-2796-4ad2-a342-3661afa62383","Type":"ContainerStarted","Data":"2701bae5178de9ed981cd188c27d1927ae6bbb38cd686c28b1cf6af8a68e9d4f"}
Jan 23 14:23:18 crc kubenswrapper[4775]: I0123 14:23:18.481616 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/openstack-galera-0" podStartSLOduration=7.915844805 podStartE2EDuration="36.481592331s" podCreationTimestamp="2026-01-23 14:22:42 +0000 UTC" firstStartedPulling="2026-01-23 14:22:43.92132343 +0000 UTC m=+1110.916152170" lastFinishedPulling="2026-01-23 14:23:12.487070916 +0000 UTC m=+1139.481899696" observedRunningTime="2026-01-23 14:23:18.471952962 +0000 UTC m=+1145.466781772" watchObservedRunningTime="2026-01-23 14:23:18.481592331 +0000 UTC m=+1145.476421101"
Jan 23 14:23:18 crc kubenswrapper[4775]: I0123 14:23:18.501229 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/openstack-cell1-galera-0" podStartSLOduration=8.467581303 podStartE2EDuration="35.501203969s" podCreationTimestamp="2026-01-23 14:22:43 +0000 UTC" firstStartedPulling="2026-01-23 14:22:45.455007995 +0000 UTC m=+1112.449836775" lastFinishedPulling="2026-01-23 14:23:12.488630671 +0000 UTC m=+1139.483459441" observedRunningTime="2026-01-23 14:23:18.493717412 +0000 UTC m=+1145.488546182" watchObservedRunningTime="2026-01-23 14:23:18.501203969 +0000 UTC m=+1145.496032749"
Jan 23 14:23:23 crc kubenswrapper[4775]: I0123 14:23:23.219484 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:23:23 crc kubenswrapper[4775]: I0123 14:23:23.221507 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:23:23 crc kubenswrapper[4775]: I0123 14:23:23.221673 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg"
Jan 23 14:23:23 crc kubenswrapper[4775]: I0123 14:23:23.222399 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"04aeabd8c4a1cb3e5fe85b5d65d741e8a1d8f8a6f9824c7a0b310cfc24829df1"} pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 14:23:23 crc kubenswrapper[4775]: I0123 14:23:23.222567 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" containerID="cri-o://04aeabd8c4a1cb3e5fe85b5d65d741e8a1d8f8a6f9824c7a0b310cfc24829df1" gracePeriod=600
Jan 23 14:23:23 crc kubenswrapper[4775]: I0123 14:23:23.423862 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:23:23 crc kubenswrapper[4775]: I0123 14:23:23.424331 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:23:23 crc kubenswrapper[4775]: I0123 14:23:23.497647 4775 generic.go:334] "Generic (PLEG): container finished" podID="4fea0767-0566-4214-855d-ed0373946271" containerID="04aeabd8c4a1cb3e5fe85b5d65d741e8a1d8f8a6f9824c7a0b310cfc24829df1" exitCode=0
Jan 23 14:23:23 crc kubenswrapper[4775]: I0123 14:23:23.497691 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerDied","Data":"04aeabd8c4a1cb3e5fe85b5d65d741e8a1d8f8a6f9824c7a0b310cfc24829df1"}
Jan 23 14:23:23 crc kubenswrapper[4775]: I0123 14:23:23.497725 4775 scope.go:117] "RemoveContainer" containerID="fa8fa956c376098d850acaf12f40cfec6f35655328fae4e2ad440d4fb20e4881"
Jan 23 14:23:23 crc kubenswrapper[4775]: I0123 14:23:23.547700 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:23:23 crc kubenswrapper[4775]: I0123 14:23:23.645146 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:23:24 crc kubenswrapper[4775]: I0123 14:23:24.509025 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerStarted","Data":"a5634c941e351401aed478dd8e700e6d7b7de6241fab2a08ba60719db5eab596"}
Jan 23 14:23:24 crc kubenswrapper[4775]: I0123 14:23:24.855291 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/openstack-cell1-galera-0"
Jan 23 14:23:24 crc kubenswrapper[4775]: I0123 14:23:24.855363 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/openstack-cell1-galera-0"
Jan 23 14:23:24 crc kubenswrapper[4775]: I0123 14:23:24.950601 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/openstack-cell1-galera-0"
Jan 23 14:23:25 crc kubenswrapper[4775]: I0123 14:23:25.683616 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/openstack-cell1-galera-0"
Jan 23 14:23:32 crc kubenswrapper[4775]: I0123 14:23:32.185368 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/root-account-create-update-czhxs"]
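[editor's note] The machine-config-daemon sequence above is the full liveness-failure lifecycle: probe fails (connection refused on 127.0.0.1:8798/health), the kubelet marks the container unhealthy, kills it with the pod's grace period (600s here), and PLEG then reports ContainerDied followed by ContainerStarted for the replacement. A sketch of an equivalent probe declaration with the k8s.io/api types; only the host, port, and path come from the log, and the thresholds are assumptions (the daemon's real manifest may differ, and older API versions name ProbeHandler simply Handler):

// liveness_probe.go — illustrative probe matching the log's health endpoint.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	probe := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Host: "127.0.0.1",
				Path: "/health",
				Port: intstr.FromInt(8798),
			},
		},
		PeriodSeconds:    10, // assumed
		FailureThreshold: 3,  // assumed; the kubelet restarts after repeated failures
	}
	fmt.Printf("liveness: GET http://%s:%s%s\n",
		probe.HTTPGet.Host, probe.HTTPGet.Port.String(), probe.HTTPGet.Path)
}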
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-czhxs" Jan 23 14:23:32 crc kubenswrapper[4775]: I0123 14:23:32.190625 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstack-mariadb-root-db-secret" Jan 23 14:23:32 crc kubenswrapper[4775]: I0123 14:23:32.248356 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-czhxs"] Jan 23 14:23:32 crc kubenswrapper[4775]: I0123 14:23:32.295069 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dfv4\" (UniqueName: \"kubernetes.io/projected/147f8416-94ee-4e77-bda1-ad3a06658335-kube-api-access-6dfv4\") pod \"root-account-create-update-czhxs\" (UID: \"147f8416-94ee-4e77-bda1-ad3a06658335\") " pod="nova-kuttl-default/root-account-create-update-czhxs" Jan 23 14:23:32 crc kubenswrapper[4775]: I0123 14:23:32.295197 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/147f8416-94ee-4e77-bda1-ad3a06658335-operator-scripts\") pod \"root-account-create-update-czhxs\" (UID: \"147f8416-94ee-4e77-bda1-ad3a06658335\") " pod="nova-kuttl-default/root-account-create-update-czhxs" Jan 23 14:23:32 crc kubenswrapper[4775]: I0123 14:23:32.397349 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dfv4\" (UniqueName: \"kubernetes.io/projected/147f8416-94ee-4e77-bda1-ad3a06658335-kube-api-access-6dfv4\") pod \"root-account-create-update-czhxs\" (UID: \"147f8416-94ee-4e77-bda1-ad3a06658335\") " pod="nova-kuttl-default/root-account-create-update-czhxs" Jan 23 14:23:32 crc kubenswrapper[4775]: I0123 14:23:32.397475 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/147f8416-94ee-4e77-bda1-ad3a06658335-operator-scripts\") pod \"root-account-create-update-czhxs\" (UID: \"147f8416-94ee-4e77-bda1-ad3a06658335\") " pod="nova-kuttl-default/root-account-create-update-czhxs" Jan 23 14:23:32 crc kubenswrapper[4775]: I0123 14:23:32.399124 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/147f8416-94ee-4e77-bda1-ad3a06658335-operator-scripts\") pod \"root-account-create-update-czhxs\" (UID: \"147f8416-94ee-4e77-bda1-ad3a06658335\") " pod="nova-kuttl-default/root-account-create-update-czhxs" Jan 23 14:23:32 crc kubenswrapper[4775]: I0123 14:23:32.436237 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dfv4\" (UniqueName: \"kubernetes.io/projected/147f8416-94ee-4e77-bda1-ad3a06658335-kube-api-access-6dfv4\") pod \"root-account-create-update-czhxs\" (UID: \"147f8416-94ee-4e77-bda1-ad3a06658335\") " pod="nova-kuttl-default/root-account-create-update-czhxs" Jan 23 14:23:32 crc kubenswrapper[4775]: I0123 14:23:32.559705 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-czhxs" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.080573 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-czhxs"] Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.337372 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-db-create-8k7zh"] Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.338864 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-create-8k7zh" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.350871 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-create-8k7zh"] Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.414574 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da477c0f-52c9-4e94-894f-d953e46afd95-operator-scripts\") pod \"keystone-db-create-8k7zh\" (UID: \"da477c0f-52c9-4e94-894f-d953e46afd95\") " pod="nova-kuttl-default/keystone-db-create-8k7zh" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.414690 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94j92\" (UniqueName: \"kubernetes.io/projected/da477c0f-52c9-4e94-894f-d953e46afd95-kube-api-access-94j92\") pod \"keystone-db-create-8k7zh\" (UID: \"da477c0f-52c9-4e94-894f-d953e46afd95\") " pod="nova-kuttl-default/keystone-db-create-8k7zh" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.434782 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-72a2-account-create-update-4q5xn"] Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.435910 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-72a2-account-create-update-4q5xn" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.439896 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-db-secret" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.449781 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-72a2-account-create-update-4q5xn"] Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.515220 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da477c0f-52c9-4e94-894f-d953e46afd95-operator-scripts\") pod \"keystone-db-create-8k7zh\" (UID: \"da477c0f-52c9-4e94-894f-d953e46afd95\") " pod="nova-kuttl-default/keystone-db-create-8k7zh" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.515297 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a5345f7-7dc8-4e09-8566-ee1dbb897cce-operator-scripts\") pod \"keystone-72a2-account-create-update-4q5xn\" (UID: \"7a5345f7-7dc8-4e09-8566-ee1dbb897cce\") " pod="nova-kuttl-default/keystone-72a2-account-create-update-4q5xn" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.515327 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptpbw\" (UniqueName: \"kubernetes.io/projected/7a5345f7-7dc8-4e09-8566-ee1dbb897cce-kube-api-access-ptpbw\") pod \"keystone-72a2-account-create-update-4q5xn\" (UID: \"7a5345f7-7dc8-4e09-8566-ee1dbb897cce\") " pod="nova-kuttl-default/keystone-72a2-account-create-update-4q5xn" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.515377 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94j92\" (UniqueName: \"kubernetes.io/projected/da477c0f-52c9-4e94-894f-d953e46afd95-kube-api-access-94j92\") pod \"keystone-db-create-8k7zh\" (UID: \"da477c0f-52c9-4e94-894f-d953e46afd95\") " pod="nova-kuttl-default/keystone-db-create-8k7zh" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.516140 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da477c0f-52c9-4e94-894f-d953e46afd95-operator-scripts\") pod \"keystone-db-create-8k7zh\" (UID: \"da477c0f-52c9-4e94-894f-d953e46afd95\") " pod="nova-kuttl-default/keystone-db-create-8k7zh" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.542540 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94j92\" (UniqueName: \"kubernetes.io/projected/da477c0f-52c9-4e94-894f-d953e46afd95-kube-api-access-94j92\") pod \"keystone-db-create-8k7zh\" (UID: \"da477c0f-52c9-4e94-894f-d953e46afd95\") " pod="nova-kuttl-default/keystone-db-create-8k7zh" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.616541 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a5345f7-7dc8-4e09-8566-ee1dbb897cce-operator-scripts\") pod \"keystone-72a2-account-create-update-4q5xn\" (UID: \"7a5345f7-7dc8-4e09-8566-ee1dbb897cce\") " pod="nova-kuttl-default/keystone-72a2-account-create-update-4q5xn" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.616600 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptpbw\" (UniqueName: 
\"kubernetes.io/projected/7a5345f7-7dc8-4e09-8566-ee1dbb897cce-kube-api-access-ptpbw\") pod \"keystone-72a2-account-create-update-4q5xn\" (UID: \"7a5345f7-7dc8-4e09-8566-ee1dbb897cce\") " pod="nova-kuttl-default/keystone-72a2-account-create-update-4q5xn" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.619086 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a5345f7-7dc8-4e09-8566-ee1dbb897cce-operator-scripts\") pod \"keystone-72a2-account-create-update-4q5xn\" (UID: \"7a5345f7-7dc8-4e09-8566-ee1dbb897cce\") " pod="nova-kuttl-default/keystone-72a2-account-create-update-4q5xn" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.632682 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-db-create-qn6k5"] Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.633871 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-create-qn6k5" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.644345 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptpbw\" (UniqueName: \"kubernetes.io/projected/7a5345f7-7dc8-4e09-8566-ee1dbb897cce-kube-api-access-ptpbw\") pod \"keystone-72a2-account-create-update-4q5xn\" (UID: \"7a5345f7-7dc8-4e09-8566-ee1dbb897cce\") " pod="nova-kuttl-default/keystone-72a2-account-create-update-4q5xn" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.657734 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-create-qn6k5"] Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.659493 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-czhxs" event={"ID":"147f8416-94ee-4e77-bda1-ad3a06658335","Type":"ContainerStarted","Data":"405af6d0ad574516571eab38c8f59961044ec46ba3fe4637f7db48cea3e9b24f"} Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.683850 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-create-8k7zh" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.717226 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7a04db9-60c9-4bce-8100-18a4134d0c86-operator-scripts\") pod \"placement-db-create-qn6k5\" (UID: \"c7a04db9-60c9-4bce-8100-18a4134d0c86\") " pod="nova-kuttl-default/placement-db-create-qn6k5" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.717272 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f62n\" (UniqueName: \"kubernetes.io/projected/c7a04db9-60c9-4bce-8100-18a4134d0c86-kube-api-access-9f62n\") pod \"placement-db-create-qn6k5\" (UID: \"c7a04db9-60c9-4bce-8100-18a4134d0c86\") " pod="nova-kuttl-default/placement-db-create-qn6k5" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.749519 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-fb53-account-create-update-mth7w"] Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.751993 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-fb53-account-create-update-mth7w" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.752559 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-fb53-account-create-update-mth7w"] Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.763207 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-db-secret" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.763655 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-72a2-account-create-update-4q5xn" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.819185 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzbz4\" (UniqueName: \"kubernetes.io/projected/2887a864-f392-4887-8b38-bde90ef8f18d-kube-api-access-xzbz4\") pod \"placement-fb53-account-create-update-mth7w\" (UID: \"2887a864-f392-4887-8b38-bde90ef8f18d\") " pod="nova-kuttl-default/placement-fb53-account-create-update-mth7w" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.819704 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7a04db9-60c9-4bce-8100-18a4134d0c86-operator-scripts\") pod \"placement-db-create-qn6k5\" (UID: \"c7a04db9-60c9-4bce-8100-18a4134d0c86\") " pod="nova-kuttl-default/placement-db-create-qn6k5" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.819751 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f62n\" (UniqueName: \"kubernetes.io/projected/c7a04db9-60c9-4bce-8100-18a4134d0c86-kube-api-access-9f62n\") pod \"placement-db-create-qn6k5\" (UID: \"c7a04db9-60c9-4bce-8100-18a4134d0c86\") " pod="nova-kuttl-default/placement-db-create-qn6k5" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.819872 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2887a864-f392-4887-8b38-bde90ef8f18d-operator-scripts\") pod \"placement-fb53-account-create-update-mth7w\" (UID: \"2887a864-f392-4887-8b38-bde90ef8f18d\") " pod="nova-kuttl-default/placement-fb53-account-create-update-mth7w" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.821315 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7a04db9-60c9-4bce-8100-18a4134d0c86-operator-scripts\") pod \"placement-db-create-qn6k5\" (UID: \"c7a04db9-60c9-4bce-8100-18a4134d0c86\") " pod="nova-kuttl-default/placement-db-create-qn6k5" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.841613 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f62n\" (UniqueName: \"kubernetes.io/projected/c7a04db9-60c9-4bce-8100-18a4134d0c86-kube-api-access-9f62n\") pod \"placement-db-create-qn6k5\" (UID: \"c7a04db9-60c9-4bce-8100-18a4134d0c86\") " pod="nova-kuttl-default/placement-db-create-qn6k5" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.920953 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzbz4\" (UniqueName: \"kubernetes.io/projected/2887a864-f392-4887-8b38-bde90ef8f18d-kube-api-access-xzbz4\") pod \"placement-fb53-account-create-update-mth7w\" (UID: \"2887a864-f392-4887-8b38-bde90ef8f18d\") " 
pod="nova-kuttl-default/placement-fb53-account-create-update-mth7w" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.921073 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2887a864-f392-4887-8b38-bde90ef8f18d-operator-scripts\") pod \"placement-fb53-account-create-update-mth7w\" (UID: \"2887a864-f392-4887-8b38-bde90ef8f18d\") " pod="nova-kuttl-default/placement-fb53-account-create-update-mth7w" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.921674 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2887a864-f392-4887-8b38-bde90ef8f18d-operator-scripts\") pod \"placement-fb53-account-create-update-mth7w\" (UID: \"2887a864-f392-4887-8b38-bde90ef8f18d\") " pod="nova-kuttl-default/placement-fb53-account-create-update-mth7w" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.946690 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzbz4\" (UniqueName: \"kubernetes.io/projected/2887a864-f392-4887-8b38-bde90ef8f18d-kube-api-access-xzbz4\") pod \"placement-fb53-account-create-update-mth7w\" (UID: \"2887a864-f392-4887-8b38-bde90ef8f18d\") " pod="nova-kuttl-default/placement-fb53-account-create-update-mth7w" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.993403 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-create-qn6k5" Jan 23 14:23:34 crc kubenswrapper[4775]: I0123 14:23:34.050734 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-72a2-account-create-update-4q5xn"] Jan 23 14:23:34 crc kubenswrapper[4775]: I0123 14:23:34.131837 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-fb53-account-create-update-mth7w" Jan 23 14:23:34 crc kubenswrapper[4775]: I0123 14:23:34.136192 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-create-8k7zh"] Jan 23 14:23:34 crc kubenswrapper[4775]: W0123 14:23:34.159630 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda477c0f_52c9_4e94_894f_d953e46afd95.slice/crio-79d037d5c3cf290495e933ccff8ef1742c1454d3fce36960b9f307c4f3e5cd83 WatchSource:0}: Error finding container 79d037d5c3cf290495e933ccff8ef1742c1454d3fce36960b9f307c4f3e5cd83: Status 404 returned error can't find the container with id 79d037d5c3cf290495e933ccff8ef1742c1454d3fce36960b9f307c4f3e5cd83 Jan 23 14:23:34 crc kubenswrapper[4775]: I0123 14:23:34.374021 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-fb53-account-create-update-mth7w"] Jan 23 14:23:34 crc kubenswrapper[4775]: W0123 14:23:34.380507 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2887a864_f392_4887_8b38_bde90ef8f18d.slice/crio-02596b4b1c706019a8d6fa34ecd0dfb24ae934ff81f9a5a8def7d538d78d1485 WatchSource:0}: Error finding container 02596b4b1c706019a8d6fa34ecd0dfb24ae934ff81f9a5a8def7d538d78d1485: Status 404 returned error can't find the container with id 02596b4b1c706019a8d6fa34ecd0dfb24ae934ff81f9a5a8def7d538d78d1485 Jan 23 14:23:34 crc kubenswrapper[4775]: I0123 14:23:34.538771 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-create-qn6k5"] Jan 23 14:23:34 crc kubenswrapper[4775]: I0123 14:23:34.668018 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-create-8k7zh" event={"ID":"da477c0f-52c9-4e94-894f-d953e46afd95","Type":"ContainerStarted","Data":"4198c894ee5e56e286b0cbfe28fec2b93833db9cb46297fad57dce94d57cabf9"} Jan 23 14:23:34 crc kubenswrapper[4775]: I0123 14:23:34.668943 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-create-8k7zh" event={"ID":"da477c0f-52c9-4e94-894f-d953e46afd95","Type":"ContainerStarted","Data":"79d037d5c3cf290495e933ccff8ef1742c1454d3fce36960b9f307c4f3e5cd83"} Jan 23 14:23:34 crc kubenswrapper[4775]: I0123 14:23:34.669060 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-create-qn6k5" event={"ID":"c7a04db9-60c9-4bce-8100-18a4134d0c86","Type":"ContainerStarted","Data":"6ae01278c94162e3f61e3a0dd642725fbd3ab9566bad996ff5a2aac0b55f4ff8"} Jan 23 14:23:34 crc kubenswrapper[4775]: I0123 14:23:34.672504 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-czhxs" event={"ID":"147f8416-94ee-4e77-bda1-ad3a06658335","Type":"ContainerStarted","Data":"a13f8eef0e3c756f922ffa047c8687839a95c0c6de399f124374a283f7dcaa06"} Jan 23 14:23:34 crc kubenswrapper[4775]: I0123 14:23:34.673896 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-72a2-account-create-update-4q5xn" event={"ID":"7a5345f7-7dc8-4e09-8566-ee1dbb897cce","Type":"ContainerStarted","Data":"45eb281a90784378326e137fb73e4ed8e5e8582744a86eeaf4ee707b7c73c128"} Jan 23 14:23:34 crc kubenswrapper[4775]: I0123 14:23:34.673932 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-72a2-account-create-update-4q5xn" 
event={"ID":"7a5345f7-7dc8-4e09-8566-ee1dbb897cce","Type":"ContainerStarted","Data":"236583e0639aab4177c92e4624c67f9bb19bceffe2c766fbdeb59a8d591503f1"} Jan 23 14:23:34 crc kubenswrapper[4775]: I0123 14:23:34.675863 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-fb53-account-create-update-mth7w" event={"ID":"2887a864-f392-4887-8b38-bde90ef8f18d","Type":"ContainerStarted","Data":"02596b4b1c706019a8d6fa34ecd0dfb24ae934ff81f9a5a8def7d538d78d1485"} Jan 23 14:23:34 crc kubenswrapper[4775]: I0123 14:23:34.694665 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/root-account-create-update-czhxs" podStartSLOduration=2.694640706 podStartE2EDuration="2.694640706s" podCreationTimestamp="2026-01-23 14:23:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:23:34.686090508 +0000 UTC m=+1161.680919248" watchObservedRunningTime="2026-01-23 14:23:34.694640706 +0000 UTC m=+1161.689469446" Jan 23 14:23:34 crc kubenswrapper[4775]: I0123 14:23:34.725627 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-72a2-account-create-update-4q5xn" podStartSLOduration=1.725603362 podStartE2EDuration="1.725603362s" podCreationTimestamp="2026-01-23 14:23:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:23:34.703290126 +0000 UTC m=+1161.698118886" watchObservedRunningTime="2026-01-23 14:23:34.725603362 +0000 UTC m=+1161.720432122" Jan 23 14:23:35 crc kubenswrapper[4775]: I0123 14:23:35.685601 4775 generic.go:334] "Generic (PLEG): container finished" podID="da477c0f-52c9-4e94-894f-d953e46afd95" containerID="4198c894ee5e56e286b0cbfe28fec2b93833db9cb46297fad57dce94d57cabf9" exitCode=0 Jan 23 14:23:35 crc kubenswrapper[4775]: I0123 14:23:35.685702 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-create-8k7zh" event={"ID":"da477c0f-52c9-4e94-894f-d953e46afd95","Type":"ContainerDied","Data":"4198c894ee5e56e286b0cbfe28fec2b93833db9cb46297fad57dce94d57cabf9"} Jan 23 14:23:35 crc kubenswrapper[4775]: I0123 14:23:35.688288 4775 generic.go:334] "Generic (PLEG): container finished" podID="c7a04db9-60c9-4bce-8100-18a4134d0c86" containerID="750eb99745aee2f0e8dca16ba12e68de151eeb1758e4a96888cb2f880483b793" exitCode=0 Jan 23 14:23:35 crc kubenswrapper[4775]: I0123 14:23:35.688431 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-create-qn6k5" event={"ID":"c7a04db9-60c9-4bce-8100-18a4134d0c86","Type":"ContainerDied","Data":"750eb99745aee2f0e8dca16ba12e68de151eeb1758e4a96888cb2f880483b793"} Jan 23 14:23:35 crc kubenswrapper[4775]: I0123 14:23:35.690709 4775 generic.go:334] "Generic (PLEG): container finished" podID="147f8416-94ee-4e77-bda1-ad3a06658335" containerID="a13f8eef0e3c756f922ffa047c8687839a95c0c6de399f124374a283f7dcaa06" exitCode=0 Jan 23 14:23:35 crc kubenswrapper[4775]: I0123 14:23:35.690811 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-czhxs" event={"ID":"147f8416-94ee-4e77-bda1-ad3a06658335","Type":"ContainerDied","Data":"a13f8eef0e3c756f922ffa047c8687839a95c0c6de399f124374a283f7dcaa06"} Jan 23 14:23:35 crc kubenswrapper[4775]: I0123 14:23:35.692779 4775 generic.go:334] "Generic (PLEG): container finished" 
podID="7a5345f7-7dc8-4e09-8566-ee1dbb897cce" containerID="45eb281a90784378326e137fb73e4ed8e5e8582744a86eeaf4ee707b7c73c128" exitCode=0 Jan 23 14:23:35 crc kubenswrapper[4775]: I0123 14:23:35.692904 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-72a2-account-create-update-4q5xn" event={"ID":"7a5345f7-7dc8-4e09-8566-ee1dbb897cce","Type":"ContainerDied","Data":"45eb281a90784378326e137fb73e4ed8e5e8582744a86eeaf4ee707b7c73c128"} Jan 23 14:23:35 crc kubenswrapper[4775]: I0123 14:23:35.695030 4775 generic.go:334] "Generic (PLEG): container finished" podID="2887a864-f392-4887-8b38-bde90ef8f18d" containerID="a2f2a732f030cd4d4d5df85398503f60726ce73a20188125433f4f1e1c54a86f" exitCode=0 Jan 23 14:23:35 crc kubenswrapper[4775]: I0123 14:23:35.695083 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-fb53-account-create-update-mth7w" event={"ID":"2887a864-f392-4887-8b38-bde90ef8f18d","Type":"ContainerDied","Data":"a2f2a732f030cd4d4d5df85398503f60726ce73a20188125433f4f1e1c54a86f"} Jan 23 14:23:35 crc kubenswrapper[4775]: I0123 14:23:35.697074 4775 generic.go:334] "Generic (PLEG): container finished" podID="401a94b6-0628-4cea-b62a-c3229a913d16" containerID="951381f236bf6d19ffe6bb0736f765d793795cae85a27156e7dda6bfa98ec1bb" exitCode=0 Jan 23 14:23:35 crc kubenswrapper[4775]: I0123 14:23:35.697119 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"401a94b6-0628-4cea-b62a-c3229a913d16","Type":"ContainerDied","Data":"951381f236bf6d19ffe6bb0736f765d793795cae85a27156e7dda6bfa98ec1bb"} Jan 23 14:23:36 crc kubenswrapper[4775]: I0123 14:23:36.707086 4775 generic.go:334] "Generic (PLEG): container finished" podID="70288c27-7f95-4843-a8fb-f2ac58ea8e1f" containerID="2eefbe509e8194211bd62dec0bf3bff4e146f4a7f14ae1ac4ad0df7edbe56abb" exitCode=0 Jan 23 14:23:36 crc kubenswrapper[4775]: I0123 14:23:36.707166 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"70288c27-7f95-4843-a8fb-f2ac58ea8e1f","Type":"ContainerDied","Data":"2eefbe509e8194211bd62dec0bf3bff4e146f4a7f14ae1ac4ad0df7edbe56abb"} Jan 23 14:23:36 crc kubenswrapper[4775]: I0123 14:23:36.710062 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"401a94b6-0628-4cea-b62a-c3229a913d16","Type":"ContainerStarted","Data":"9b88c2031cdf95c8db38b2e10aae786e8def8ffce67f1f2bf5cc8f4b11ad1afb"} Jan 23 14:23:36 crc kubenswrapper[4775]: I0123 14:23:36.710450 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 23 14:23:36 crc kubenswrapper[4775]: I0123 14:23:36.711936 4775 generic.go:334] "Generic (PLEG): container finished" podID="4b05c189-a694-4cbc-b679-a974e6bf99bc" containerID="67185d02961f666b77208d2d95b7f2da17886cf0f7543d1ab63c0d9e0e7ad316" exitCode=0 Jan 23 14:23:36 crc kubenswrapper[4775]: I0123 14:23:36.712067 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"4b05c189-a694-4cbc-b679-a974e6bf99bc","Type":"ContainerDied","Data":"67185d02961f666b77208d2d95b7f2da17886cf0f7543d1ab63c0d9e0e7ad316"} Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.086853 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-72a2-account-create-update-4q5xn" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.105135 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" podStartSLOduration=38.980406865 podStartE2EDuration="56.105113741s" podCreationTimestamp="2026-01-23 14:22:41 +0000 UTC" firstStartedPulling="2026-01-23 14:22:43.775811106 +0000 UTC m=+1110.770639846" lastFinishedPulling="2026-01-23 14:23:00.900517942 +0000 UTC m=+1127.895346722" observedRunningTime="2026-01-23 14:23:36.804547116 +0000 UTC m=+1163.799375896" watchObservedRunningTime="2026-01-23 14:23:37.105113741 +0000 UTC m=+1164.099942491" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.111858 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-create-8k7zh" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.124923 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-create-qn6k5" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.134785 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-fb53-account-create-update-mth7w" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.148937 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-czhxs" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.181642 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzbz4\" (UniqueName: \"kubernetes.io/projected/2887a864-f392-4887-8b38-bde90ef8f18d-kube-api-access-xzbz4\") pod \"2887a864-f392-4887-8b38-bde90ef8f18d\" (UID: \"2887a864-f392-4887-8b38-bde90ef8f18d\") " Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.182021 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a5345f7-7dc8-4e09-8566-ee1dbb897cce-operator-scripts\") pod \"7a5345f7-7dc8-4e09-8566-ee1dbb897cce\" (UID: \"7a5345f7-7dc8-4e09-8566-ee1dbb897cce\") " Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.182162 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94j92\" (UniqueName: \"kubernetes.io/projected/da477c0f-52c9-4e94-894f-d953e46afd95-kube-api-access-94j92\") pod \"da477c0f-52c9-4e94-894f-d953e46afd95\" (UID: \"da477c0f-52c9-4e94-894f-d953e46afd95\") " Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.182263 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7a04db9-60c9-4bce-8100-18a4134d0c86-operator-scripts\") pod \"c7a04db9-60c9-4bce-8100-18a4134d0c86\" (UID: \"c7a04db9-60c9-4bce-8100-18a4134d0c86\") " Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.182381 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da477c0f-52c9-4e94-894f-d953e46afd95-operator-scripts\") pod \"da477c0f-52c9-4e94-894f-d953e46afd95\" (UID: \"da477c0f-52c9-4e94-894f-d953e46afd95\") " Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.182482 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/7a5345f7-7dc8-4e09-8566-ee1dbb897cce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7a5345f7-7dc8-4e09-8566-ee1dbb897cce" (UID: "7a5345f7-7dc8-4e09-8566-ee1dbb897cce"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.182613 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f62n\" (UniqueName: \"kubernetes.io/projected/c7a04db9-60c9-4bce-8100-18a4134d0c86-kube-api-access-9f62n\") pod \"c7a04db9-60c9-4bce-8100-18a4134d0c86\" (UID: \"c7a04db9-60c9-4bce-8100-18a4134d0c86\") " Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.182672 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7a04db9-60c9-4bce-8100-18a4134d0c86-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c7a04db9-60c9-4bce-8100-18a4134d0c86" (UID: "c7a04db9-60c9-4bce-8100-18a4134d0c86"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.182752 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptpbw\" (UniqueName: \"kubernetes.io/projected/7a5345f7-7dc8-4e09-8566-ee1dbb897cce-kube-api-access-ptpbw\") pod \"7a5345f7-7dc8-4e09-8566-ee1dbb897cce\" (UID: \"7a5345f7-7dc8-4e09-8566-ee1dbb897cce\") " Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.182780 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da477c0f-52c9-4e94-894f-d953e46afd95-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "da477c0f-52c9-4e94-894f-d953e46afd95" (UID: "da477c0f-52c9-4e94-894f-d953e46afd95"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.182882 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2887a864-f392-4887-8b38-bde90ef8f18d-operator-scripts\") pod \"2887a864-f392-4887-8b38-bde90ef8f18d\" (UID: \"2887a864-f392-4887-8b38-bde90ef8f18d\") " Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.183401 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2887a864-f392-4887-8b38-bde90ef8f18d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2887a864-f392-4887-8b38-bde90ef8f18d" (UID: "2887a864-f392-4887-8b38-bde90ef8f18d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.183725 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7a04db9-60c9-4bce-8100-18a4134d0c86-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.183753 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da477c0f-52c9-4e94-894f-d953e46afd95-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.183765 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2887a864-f392-4887-8b38-bde90ef8f18d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.183778 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a5345f7-7dc8-4e09-8566-ee1dbb897cce-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.186267 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7a04db9-60c9-4bce-8100-18a4134d0c86-kube-api-access-9f62n" (OuterVolumeSpecName: "kube-api-access-9f62n") pod "c7a04db9-60c9-4bce-8100-18a4134d0c86" (UID: "c7a04db9-60c9-4bce-8100-18a4134d0c86"). InnerVolumeSpecName "kube-api-access-9f62n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.186359 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a5345f7-7dc8-4e09-8566-ee1dbb897cce-kube-api-access-ptpbw" (OuterVolumeSpecName: "kube-api-access-ptpbw") pod "7a5345f7-7dc8-4e09-8566-ee1dbb897cce" (UID: "7a5345f7-7dc8-4e09-8566-ee1dbb897cce"). InnerVolumeSpecName "kube-api-access-ptpbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.186510 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2887a864-f392-4887-8b38-bde90ef8f18d-kube-api-access-xzbz4" (OuterVolumeSpecName: "kube-api-access-xzbz4") pod "2887a864-f392-4887-8b38-bde90ef8f18d" (UID: "2887a864-f392-4887-8b38-bde90ef8f18d"). InnerVolumeSpecName "kube-api-access-xzbz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.186979 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da477c0f-52c9-4e94-894f-d953e46afd95-kube-api-access-94j92" (OuterVolumeSpecName: "kube-api-access-94j92") pod "da477c0f-52c9-4e94-894f-d953e46afd95" (UID: "da477c0f-52c9-4e94-894f-d953e46afd95"). InnerVolumeSpecName "kube-api-access-94j92". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.285058 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dfv4\" (UniqueName: \"kubernetes.io/projected/147f8416-94ee-4e77-bda1-ad3a06658335-kube-api-access-6dfv4\") pod \"147f8416-94ee-4e77-bda1-ad3a06658335\" (UID: \"147f8416-94ee-4e77-bda1-ad3a06658335\") " Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.285103 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/147f8416-94ee-4e77-bda1-ad3a06658335-operator-scripts\") pod \"147f8416-94ee-4e77-bda1-ad3a06658335\" (UID: \"147f8416-94ee-4e77-bda1-ad3a06658335\") " Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.285397 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzbz4\" (UniqueName: \"kubernetes.io/projected/2887a864-f392-4887-8b38-bde90ef8f18d-kube-api-access-xzbz4\") on node \"crc\" DevicePath \"\"" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.285415 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94j92\" (UniqueName: \"kubernetes.io/projected/da477c0f-52c9-4e94-894f-d953e46afd95-kube-api-access-94j92\") on node \"crc\" DevicePath \"\"" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.285425 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9f62n\" (UniqueName: \"kubernetes.io/projected/c7a04db9-60c9-4bce-8100-18a4134d0c86-kube-api-access-9f62n\") on node \"crc\" DevicePath \"\"" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.285434 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptpbw\" (UniqueName: \"kubernetes.io/projected/7a5345f7-7dc8-4e09-8566-ee1dbb897cce-kube-api-access-ptpbw\") on node \"crc\" DevicePath \"\"" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.285916 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/147f8416-94ee-4e77-bda1-ad3a06658335-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "147f8416-94ee-4e77-bda1-ad3a06658335" (UID: "147f8416-94ee-4e77-bda1-ad3a06658335"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.287905 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/147f8416-94ee-4e77-bda1-ad3a06658335-kube-api-access-6dfv4" (OuterVolumeSpecName: "kube-api-access-6dfv4") pod "147f8416-94ee-4e77-bda1-ad3a06658335" (UID: "147f8416-94ee-4e77-bda1-ad3a06658335"). InnerVolumeSpecName "kube-api-access-6dfv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.387474 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dfv4\" (UniqueName: \"kubernetes.io/projected/147f8416-94ee-4e77-bda1-ad3a06658335-kube-api-access-6dfv4\") on node \"crc\" DevicePath \"\"" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.387551 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/147f8416-94ee-4e77-bda1-ad3a06658335-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.723299 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-72a2-account-create-update-4q5xn" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.725709 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-fb53-account-create-update-mth7w" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.731682 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-72a2-account-create-update-4q5xn" event={"ID":"7a5345f7-7dc8-4e09-8566-ee1dbb897cce","Type":"ContainerDied","Data":"236583e0639aab4177c92e4624c67f9bb19bceffe2c766fbdeb59a8d591503f1"} Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.731743 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="236583e0639aab4177c92e4624c67f9bb19bceffe2c766fbdeb59a8d591503f1" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.731763 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-fb53-account-create-update-mth7w" event={"ID":"2887a864-f392-4887-8b38-bde90ef8f18d","Type":"ContainerDied","Data":"02596b4b1c706019a8d6fa34ecd0dfb24ae934ff81f9a5a8def7d538d78d1485"} Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.731786 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02596b4b1c706019a8d6fa34ecd0dfb24ae934ff81f9a5a8def7d538d78d1485" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.731808 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"70288c27-7f95-4843-a8fb-f2ac58ea8e1f","Type":"ContainerStarted","Data":"1c3fabd85eddaecf2f2a4c36001f074199e14c03147bf42a4502d2dc2d54274a"} Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.740884 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"4b05c189-a694-4cbc-b679-a974e6bf99bc","Type":"ContainerStarted","Data":"09e16692a9886632fbbac4f6d7eb58e6a68df930b36cf38577c12306ef8533ee"} Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.745667 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-create-8k7zh" event={"ID":"da477c0f-52c9-4e94-894f-d953e46afd95","Type":"ContainerDied","Data":"79d037d5c3cf290495e933ccff8ef1742c1454d3fce36960b9f307c4f3e5cd83"} Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.745735 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79d037d5c3cf290495e933ccff8ef1742c1454d3fce36960b9f307c4f3e5cd83" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.746015 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-create-8k7zh" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.770648 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-create-qn6k5" event={"ID":"c7a04db9-60c9-4bce-8100-18a4134d0c86","Type":"ContainerDied","Data":"6ae01278c94162e3f61e3a0dd642725fbd3ab9566bad996ff5a2aac0b55f4ff8"} Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.770708 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ae01278c94162e3f61e3a0dd642725fbd3ab9566bad996ff5a2aac0b55f4ff8" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.770795 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-db-create-qn6k5" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.800409 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-czhxs" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.800476 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-czhxs" event={"ID":"147f8416-94ee-4e77-bda1-ad3a06658335","Type":"ContainerDied","Data":"405af6d0ad574516571eab38c8f59961044ec46ba3fe4637f7db48cea3e9b24f"} Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.800499 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="405af6d0ad574516571eab38c8f59961044ec46ba3fe4637f7db48cea3e9b24f" Jan 23 14:23:38 crc kubenswrapper[4775]: I0123 14:23:38.668080 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/root-account-create-update-czhxs"] Jan 23 14:23:38 crc kubenswrapper[4775]: I0123 14:23:38.674698 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/root-account-create-update-czhxs"] Jan 23 14:23:38 crc kubenswrapper[4775]: I0123 14:23:38.805897 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:23:38 crc kubenswrapper[4775]: I0123 14:23:38.831737 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/rabbitmq-cell1-server-0" podStartSLOduration=40.30453698 podStartE2EDuration="56.831715971s" podCreationTimestamp="2026-01-23 14:22:42 +0000 UTC" firstStartedPulling="2026-01-23 14:22:44.320003706 +0000 UTC m=+1111.314832446" lastFinishedPulling="2026-01-23 14:23:00.847182697 +0000 UTC m=+1127.842011437" observedRunningTime="2026-01-23 14:23:38.824370378 +0000 UTC m=+1165.819199128" watchObservedRunningTime="2026-01-23 14:23:38.831715971 +0000 UTC m=+1165.826544711" Jan 23 14:23:38 crc kubenswrapper[4775]: I0123 14:23:38.850710 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/rabbitmq-server-0" podStartSLOduration=-9223371979.004097 podStartE2EDuration="57.8506789s" podCreationTimestamp="2026-01-23 14:22:41 +0000 UTC" firstStartedPulling="2026-01-23 14:22:43.27522248 +0000 UTC m=+1110.270051210" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:23:38.84928625 +0000 UTC m=+1165.844115010" watchObservedRunningTime="2026-01-23 14:23:38.8506789 +0000 UTC m=+1165.845507640" Jan 23 14:23:39 crc kubenswrapper[4775]: I0123 14:23:39.722761 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="147f8416-94ee-4e77-bda1-ad3a06658335" path="/var/lib/kubelet/pods/147f8416-94ee-4e77-bda1-ad3a06658335/volumes" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.209201 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/root-account-create-update-4sr9v"] Jan 23 14:23:42 crc kubenswrapper[4775]: E0123 14:23:42.209978 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="147f8416-94ee-4e77-bda1-ad3a06658335" containerName="mariadb-account-create-update" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.209994 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="147f8416-94ee-4e77-bda1-ad3a06658335" containerName="mariadb-account-create-update" Jan 23 14:23:42 crc kubenswrapper[4775]: E0123 14:23:42.210013 4775 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="2887a864-f392-4887-8b38-bde90ef8f18d" containerName="mariadb-account-create-update" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.210021 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="2887a864-f392-4887-8b38-bde90ef8f18d" containerName="mariadb-account-create-update" Jan 23 14:23:42 crc kubenswrapper[4775]: E0123 14:23:42.210029 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da477c0f-52c9-4e94-894f-d953e46afd95" containerName="mariadb-database-create" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.210036 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="da477c0f-52c9-4e94-894f-d953e46afd95" containerName="mariadb-database-create" Jan 23 14:23:42 crc kubenswrapper[4775]: E0123 14:23:42.210059 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a5345f7-7dc8-4e09-8566-ee1dbb897cce" containerName="mariadb-account-create-update" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.210066 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a5345f7-7dc8-4e09-8566-ee1dbb897cce" containerName="mariadb-account-create-update" Jan 23 14:23:42 crc kubenswrapper[4775]: E0123 14:23:42.210079 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7a04db9-60c9-4bce-8100-18a4134d0c86" containerName="mariadb-database-create" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.210085 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7a04db9-60c9-4bce-8100-18a4134d0c86" containerName="mariadb-database-create" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.210250 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="147f8416-94ee-4e77-bda1-ad3a06658335" containerName="mariadb-account-create-update" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.210264 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a5345f7-7dc8-4e09-8566-ee1dbb897cce" containerName="mariadb-account-create-update" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.210286 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="2887a864-f392-4887-8b38-bde90ef8f18d" containerName="mariadb-account-create-update" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.210294 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7a04db9-60c9-4bce-8100-18a4134d0c86" containerName="mariadb-database-create" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.210306 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="da477c0f-52c9-4e94-894f-d953e46afd95" containerName="mariadb-database-create" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.210910 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-4sr9v" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.213294 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstack-mariadb-root-db-secret" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.242050 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-4sr9v"] Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.366866 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/498646da-c28d-4b9c-b61b-cd3c3b59455d-operator-scripts\") pod \"root-account-create-update-4sr9v\" (UID: \"498646da-c28d-4b9c-b61b-cd3c3b59455d\") " pod="nova-kuttl-default/root-account-create-update-4sr9v" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.367302 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x42sf\" (UniqueName: \"kubernetes.io/projected/498646da-c28d-4b9c-b61b-cd3c3b59455d-kube-api-access-x42sf\") pod \"root-account-create-update-4sr9v\" (UID: \"498646da-c28d-4b9c-b61b-cd3c3b59455d\") " pod="nova-kuttl-default/root-account-create-update-4sr9v" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.470506 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/498646da-c28d-4b9c-b61b-cd3c3b59455d-operator-scripts\") pod \"root-account-create-update-4sr9v\" (UID: \"498646da-c28d-4b9c-b61b-cd3c3b59455d\") " pod="nova-kuttl-default/root-account-create-update-4sr9v" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.471046 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x42sf\" (UniqueName: \"kubernetes.io/projected/498646da-c28d-4b9c-b61b-cd3c3b59455d-kube-api-access-x42sf\") pod \"root-account-create-update-4sr9v\" (UID: \"498646da-c28d-4b9c-b61b-cd3c3b59455d\") " pod="nova-kuttl-default/root-account-create-update-4sr9v" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.472069 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/498646da-c28d-4b9c-b61b-cd3c3b59455d-operator-scripts\") pod \"root-account-create-update-4sr9v\" (UID: \"498646da-c28d-4b9c-b61b-cd3c3b59455d\") " pod="nova-kuttl-default/root-account-create-update-4sr9v" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.502458 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x42sf\" (UniqueName: \"kubernetes.io/projected/498646da-c28d-4b9c-b61b-cd3c3b59455d-kube-api-access-x42sf\") pod \"root-account-create-update-4sr9v\" (UID: \"498646da-c28d-4b9c-b61b-cd3c3b59455d\") " pod="nova-kuttl-default/root-account-create-update-4sr9v" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.526797 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-4sr9v" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.955026 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:23:43 crc kubenswrapper[4775]: I0123 14:23:43.014353 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-4sr9v"] Jan 23 14:23:43 crc kubenswrapper[4775]: I0123 14:23:43.842902 4775 generic.go:334] "Generic (PLEG): container finished" podID="498646da-c28d-4b9c-b61b-cd3c3b59455d" containerID="e4d3d7427f456db9c410656944ad8601abb63e17de245cf5ef8fa44d9943c71d" exitCode=0 Jan 23 14:23:43 crc kubenswrapper[4775]: I0123 14:23:43.842981 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-4sr9v" event={"ID":"498646da-c28d-4b9c-b61b-cd3c3b59455d","Type":"ContainerDied","Data":"e4d3d7427f456db9c410656944ad8601abb63e17de245cf5ef8fa44d9943c71d"} Jan 23 14:23:43 crc kubenswrapper[4775]: I0123 14:23:43.843288 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-4sr9v" event={"ID":"498646da-c28d-4b9c-b61b-cd3c3b59455d","Type":"ContainerStarted","Data":"8a1e3102ab3678ec9a55bd89768797ae82d5c544972ca2b935a13b2efcc69545"} Jan 23 14:23:45 crc kubenswrapper[4775]: I0123 14:23:45.236334 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-4sr9v" Jan 23 14:23:45 crc kubenswrapper[4775]: I0123 14:23:45.313793 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x42sf\" (UniqueName: \"kubernetes.io/projected/498646da-c28d-4b9c-b61b-cd3c3b59455d-kube-api-access-x42sf\") pod \"498646da-c28d-4b9c-b61b-cd3c3b59455d\" (UID: \"498646da-c28d-4b9c-b61b-cd3c3b59455d\") " Jan 23 14:23:45 crc kubenswrapper[4775]: I0123 14:23:45.313985 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/498646da-c28d-4b9c-b61b-cd3c3b59455d-operator-scripts\") pod \"498646da-c28d-4b9c-b61b-cd3c3b59455d\" (UID: \"498646da-c28d-4b9c-b61b-cd3c3b59455d\") " Jan 23 14:23:45 crc kubenswrapper[4775]: I0123 14:23:45.315005 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/498646da-c28d-4b9c-b61b-cd3c3b59455d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "498646da-c28d-4b9c-b61b-cd3c3b59455d" (UID: "498646da-c28d-4b9c-b61b-cd3c3b59455d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:23:45 crc kubenswrapper[4775]: I0123 14:23:45.322668 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/498646da-c28d-4b9c-b61b-cd3c3b59455d-kube-api-access-x42sf" (OuterVolumeSpecName: "kube-api-access-x42sf") pod "498646da-c28d-4b9c-b61b-cd3c3b59455d" (UID: "498646da-c28d-4b9c-b61b-cd3c3b59455d"). InnerVolumeSpecName "kube-api-access-x42sf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:23:45 crc kubenswrapper[4775]: I0123 14:23:45.415486 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x42sf\" (UniqueName: \"kubernetes.io/projected/498646da-c28d-4b9c-b61b-cd3c3b59455d-kube-api-access-x42sf\") on node \"crc\" DevicePath \"\"" Jan 23 14:23:45 crc kubenswrapper[4775]: I0123 14:23:45.415788 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/498646da-c28d-4b9c-b61b-cd3c3b59455d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:23:45 crc kubenswrapper[4775]: I0123 14:23:45.863103 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-4sr9v" event={"ID":"498646da-c28d-4b9c-b61b-cd3c3b59455d","Type":"ContainerDied","Data":"8a1e3102ab3678ec9a55bd89768797ae82d5c544972ca2b935a13b2efcc69545"} Jan 23 14:23:45 crc kubenswrapper[4775]: I0123 14:23:45.863361 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a1e3102ab3678ec9a55bd89768797ae82d5c544972ca2b935a13b2efcc69545" Jan 23 14:23:45 crc kubenswrapper[4775]: I0123 14:23:45.863315 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-4sr9v" Jan 23 14:23:48 crc kubenswrapper[4775]: I0123 14:23:48.681660 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/root-account-create-update-4sr9v"] Jan 23 14:23:48 crc kubenswrapper[4775]: I0123 14:23:48.693200 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/root-account-create-update-4sr9v"] Jan 23 14:23:49 crc kubenswrapper[4775]: I0123 14:23:49.728970 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="498646da-c28d-4b9c-b61b-cd3c3b59455d" path="/var/lib/kubelet/pods/498646da-c28d-4b9c-b61b-cd3c3b59455d/volumes" Jan 23 14:23:52 crc kubenswrapper[4775]: I0123 14:23:52.957354 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.227561 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.685415 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/root-account-create-update-6bcp5"] Jan 23 14:23:53 crc kubenswrapper[4775]: E0123 14:23:53.685777 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="498646da-c28d-4b9c-b61b-cd3c3b59455d" containerName="mariadb-account-create-update" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.685791 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="498646da-c28d-4b9c-b61b-cd3c3b59455d" containerName="mariadb-account-create-update" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.685986 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="498646da-c28d-4b9c-b61b-cd3c3b59455d" containerName="mariadb-account-create-update" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.686503 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-6bcp5" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.688576 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstack-cell1-mariadb-root-db-secret" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.700409 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-6bcp5"] Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.727059 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-db-sync-2qsr9"] Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.728510 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-2qsr9" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.737170 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.737376 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-p9s8k" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.737410 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.737625 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.752087 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-sync-2qsr9"] Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.773631 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwnwl\" (UniqueName: \"kubernetes.io/projected/ccc48032-9af5-4d79-bc89-f7d576911b23-kube-api-access-nwnwl\") pod \"root-account-create-update-6bcp5\" (UID: \"ccc48032-9af5-4d79-bc89-f7d576911b23\") " pod="nova-kuttl-default/root-account-create-update-6bcp5" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.773682 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm74b\" (UniqueName: \"kubernetes.io/projected/2c017749-eae9-4edd-91eb-21b25275a986-kube-api-access-mm74b\") pod \"keystone-db-sync-2qsr9\" (UID: \"2c017749-eae9-4edd-91eb-21b25275a986\") " pod="nova-kuttl-default/keystone-db-sync-2qsr9" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.773732 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c017749-eae9-4edd-91eb-21b25275a986-config-data\") pod \"keystone-db-sync-2qsr9\" (UID: \"2c017749-eae9-4edd-91eb-21b25275a986\") " pod="nova-kuttl-default/keystone-db-sync-2qsr9" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.774083 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c017749-eae9-4edd-91eb-21b25275a986-combined-ca-bundle\") pod \"keystone-db-sync-2qsr9\" (UID: \"2c017749-eae9-4edd-91eb-21b25275a986\") " pod="nova-kuttl-default/keystone-db-sync-2qsr9" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.774147 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ccc48032-9af5-4d79-bc89-f7d576911b23-operator-scripts\") pod \"root-account-create-update-6bcp5\" (UID: \"ccc48032-9af5-4d79-bc89-f7d576911b23\") " pod="nova-kuttl-default/root-account-create-update-6bcp5" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.875056 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.875245 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwnwl\" (UniqueName: \"kubernetes.io/projected/ccc48032-9af5-4d79-bc89-f7d576911b23-kube-api-access-nwnwl\") pod \"root-account-create-update-6bcp5\" (UID: \"ccc48032-9af5-4d79-bc89-f7d576911b23\") " pod="nova-kuttl-default/root-account-create-update-6bcp5" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.875308 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm74b\" (UniqueName: \"kubernetes.io/projected/2c017749-eae9-4edd-91eb-21b25275a986-kube-api-access-mm74b\") pod \"keystone-db-sync-2qsr9\" (UID: \"2c017749-eae9-4edd-91eb-21b25275a986\") " pod="nova-kuttl-default/keystone-db-sync-2qsr9" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.875349 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c017749-eae9-4edd-91eb-21b25275a986-config-data\") pod \"keystone-db-sync-2qsr9\" (UID: \"2c017749-eae9-4edd-91eb-21b25275a986\") " pod="nova-kuttl-default/keystone-db-sync-2qsr9" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.875420 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c017749-eae9-4edd-91eb-21b25275a986-combined-ca-bundle\") pod \"keystone-db-sync-2qsr9\" (UID: \"2c017749-eae9-4edd-91eb-21b25275a986\") " pod="nova-kuttl-default/keystone-db-sync-2qsr9" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.875440 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccc48032-9af5-4d79-bc89-f7d576911b23-operator-scripts\") pod \"root-account-create-update-6bcp5\" (UID: \"ccc48032-9af5-4d79-bc89-f7d576911b23\") " pod="nova-kuttl-default/root-account-create-update-6bcp5" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.876105 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccc48032-9af5-4d79-bc89-f7d576911b23-operator-scripts\") pod \"root-account-create-update-6bcp5\" (UID: \"ccc48032-9af5-4d79-bc89-f7d576911b23\") " pod="nova-kuttl-default/root-account-create-update-6bcp5" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.881011 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c017749-eae9-4edd-91eb-21b25275a986-combined-ca-bundle\") pod \"keystone-db-sync-2qsr9\" (UID: \"2c017749-eae9-4edd-91eb-21b25275a986\") " pod="nova-kuttl-default/keystone-db-sync-2qsr9" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.881538 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c017749-eae9-4edd-91eb-21b25275a986-config-data\") pod \"keystone-db-sync-2qsr9\" (UID: \"2c017749-eae9-4edd-91eb-21b25275a986\") " 
pod="nova-kuttl-default/keystone-db-sync-2qsr9" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.894003 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm74b\" (UniqueName: \"kubernetes.io/projected/2c017749-eae9-4edd-91eb-21b25275a986-kube-api-access-mm74b\") pod \"keystone-db-sync-2qsr9\" (UID: \"2c017749-eae9-4edd-91eb-21b25275a986\") " pod="nova-kuttl-default/keystone-db-sync-2qsr9" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.910208 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwnwl\" (UniqueName: \"kubernetes.io/projected/ccc48032-9af5-4d79-bc89-f7d576911b23-kube-api-access-nwnwl\") pod \"root-account-create-update-6bcp5\" (UID: \"ccc48032-9af5-4d79-bc89-f7d576911b23\") " pod="nova-kuttl-default/root-account-create-update-6bcp5" Jan 23 14:23:54 crc kubenswrapper[4775]: I0123 14:23:54.009087 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-6bcp5" Jan 23 14:23:54 crc kubenswrapper[4775]: I0123 14:23:54.055914 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-2qsr9" Jan 23 14:23:54 crc kubenswrapper[4775]: I0123 14:23:54.465855 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-6bcp5"] Jan 23 14:23:54 crc kubenswrapper[4775]: I0123 14:23:54.521634 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-sync-2qsr9"] Jan 23 14:23:54 crc kubenswrapper[4775]: I0123 14:23:54.953081 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-2qsr9" event={"ID":"2c017749-eae9-4edd-91eb-21b25275a986","Type":"ContainerStarted","Data":"4aabf4a53eeee033e98e15a63c87d0b75cdd0f192594e5d631a3fe9af880ef88"} Jan 23 14:23:54 crc kubenswrapper[4775]: I0123 14:23:54.955166 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-6bcp5" event={"ID":"ccc48032-9af5-4d79-bc89-f7d576911b23","Type":"ContainerStarted","Data":"4579a5ec0627d03f09f3dda4fc68f8fb4e44af53895a0e8c9b0a26eb695f55d2"} Jan 23 14:23:54 crc kubenswrapper[4775]: I0123 14:23:54.955214 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-6bcp5" event={"ID":"ccc48032-9af5-4d79-bc89-f7d576911b23","Type":"ContainerStarted","Data":"59ad54a1ae648ee96b97185ba8d47f1e47c69728543f57078233c5beccc4b8de"} Jan 23 14:23:55 crc kubenswrapper[4775]: E0123 14:23:55.394579 4775 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.177:46714->38.102.83.177:38819: write tcp 38.102.83.177:46714->38.102.83.177:38819: write: broken pipe Jan 23 14:23:55 crc kubenswrapper[4775]: I0123 14:23:55.966330 4775 generic.go:334] "Generic (PLEG): container finished" podID="ccc48032-9af5-4d79-bc89-f7d576911b23" containerID="4579a5ec0627d03f09f3dda4fc68f8fb4e44af53895a0e8c9b0a26eb695f55d2" exitCode=0 Jan 23 14:23:55 crc kubenswrapper[4775]: I0123 14:23:55.966375 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-6bcp5" event={"ID":"ccc48032-9af5-4d79-bc89-f7d576911b23","Type":"ContainerDied","Data":"4579a5ec0627d03f09f3dda4fc68f8fb4e44af53895a0e8c9b0a26eb695f55d2"} Jan 23 14:23:56 crc kubenswrapper[4775]: I0123 14:23:56.243553 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-6bcp5" Jan 23 14:23:56 crc kubenswrapper[4775]: I0123 14:23:56.318484 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwnwl\" (UniqueName: \"kubernetes.io/projected/ccc48032-9af5-4d79-bc89-f7d576911b23-kube-api-access-nwnwl\") pod \"ccc48032-9af5-4d79-bc89-f7d576911b23\" (UID: \"ccc48032-9af5-4d79-bc89-f7d576911b23\") " Jan 23 14:23:56 crc kubenswrapper[4775]: I0123 14:23:56.318679 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccc48032-9af5-4d79-bc89-f7d576911b23-operator-scripts\") pod \"ccc48032-9af5-4d79-bc89-f7d576911b23\" (UID: \"ccc48032-9af5-4d79-bc89-f7d576911b23\") " Jan 23 14:23:56 crc kubenswrapper[4775]: I0123 14:23:56.319385 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccc48032-9af5-4d79-bc89-f7d576911b23-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ccc48032-9af5-4d79-bc89-f7d576911b23" (UID: "ccc48032-9af5-4d79-bc89-f7d576911b23"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:23:56 crc kubenswrapper[4775]: I0123 14:23:56.323714 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccc48032-9af5-4d79-bc89-f7d576911b23-kube-api-access-nwnwl" (OuterVolumeSpecName: "kube-api-access-nwnwl") pod "ccc48032-9af5-4d79-bc89-f7d576911b23" (UID: "ccc48032-9af5-4d79-bc89-f7d576911b23"). InnerVolumeSpecName "kube-api-access-nwnwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:23:56 crc kubenswrapper[4775]: I0123 14:23:56.420992 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwnwl\" (UniqueName: \"kubernetes.io/projected/ccc48032-9af5-4d79-bc89-f7d576911b23-kube-api-access-nwnwl\") on node \"crc\" DevicePath \"\"" Jan 23 14:23:56 crc kubenswrapper[4775]: I0123 14:23:56.421557 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccc48032-9af5-4d79-bc89-f7d576911b23-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:23:56 crc kubenswrapper[4775]: I0123 14:23:56.978683 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-6bcp5" event={"ID":"ccc48032-9af5-4d79-bc89-f7d576911b23","Type":"ContainerDied","Data":"59ad54a1ae648ee96b97185ba8d47f1e47c69728543f57078233c5beccc4b8de"} Jan 23 14:23:56 crc kubenswrapper[4775]: I0123 14:23:56.978723 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59ad54a1ae648ee96b97185ba8d47f1e47c69728543f57078233c5beccc4b8de" Jan 23 14:23:56 crc kubenswrapper[4775]: I0123 14:23:56.978799 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-6bcp5" Jan 23 14:24:01 crc kubenswrapper[4775]: I0123 14:24:01.011349 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-2qsr9" event={"ID":"2c017749-eae9-4edd-91eb-21b25275a986","Type":"ContainerStarted","Data":"b4c1b23769a70549b5013f743139c0324d53830c016cc7b8320ef98ddc16b647"} Jan 23 14:24:04 crc kubenswrapper[4775]: I0123 14:24:04.045121 4775 generic.go:334] "Generic (PLEG): container finished" podID="2c017749-eae9-4edd-91eb-21b25275a986" containerID="b4c1b23769a70549b5013f743139c0324d53830c016cc7b8320ef98ddc16b647" exitCode=0 Jan 23 14:24:04 crc kubenswrapper[4775]: I0123 14:24:04.045248 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-2qsr9" event={"ID":"2c017749-eae9-4edd-91eb-21b25275a986","Type":"ContainerDied","Data":"b4c1b23769a70549b5013f743139c0324d53830c016cc7b8320ef98ddc16b647"} Jan 23 14:24:05 crc kubenswrapper[4775]: I0123 14:24:05.490866 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-2qsr9" Jan 23 14:24:05 crc kubenswrapper[4775]: I0123 14:24:05.574063 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c017749-eae9-4edd-91eb-21b25275a986-config-data\") pod \"2c017749-eae9-4edd-91eb-21b25275a986\" (UID: \"2c017749-eae9-4edd-91eb-21b25275a986\") " Jan 23 14:24:05 crc kubenswrapper[4775]: I0123 14:24:05.574217 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mm74b\" (UniqueName: \"kubernetes.io/projected/2c017749-eae9-4edd-91eb-21b25275a986-kube-api-access-mm74b\") pod \"2c017749-eae9-4edd-91eb-21b25275a986\" (UID: \"2c017749-eae9-4edd-91eb-21b25275a986\") " Jan 23 14:24:05 crc kubenswrapper[4775]: I0123 14:24:05.574250 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c017749-eae9-4edd-91eb-21b25275a986-combined-ca-bundle\") pod \"2c017749-eae9-4edd-91eb-21b25275a986\" (UID: \"2c017749-eae9-4edd-91eb-21b25275a986\") " Jan 23 14:24:05 crc kubenswrapper[4775]: I0123 14:24:05.580977 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c017749-eae9-4edd-91eb-21b25275a986-kube-api-access-mm74b" (OuterVolumeSpecName: "kube-api-access-mm74b") pod "2c017749-eae9-4edd-91eb-21b25275a986" (UID: "2c017749-eae9-4edd-91eb-21b25275a986"). InnerVolumeSpecName "kube-api-access-mm74b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:24:05 crc kubenswrapper[4775]: I0123 14:24:05.601778 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c017749-eae9-4edd-91eb-21b25275a986-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c017749-eae9-4edd-91eb-21b25275a986" (UID: "2c017749-eae9-4edd-91eb-21b25275a986"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:24:05 crc kubenswrapper[4775]: I0123 14:24:05.619409 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c017749-eae9-4edd-91eb-21b25275a986-config-data" (OuterVolumeSpecName: "config-data") pod "2c017749-eae9-4edd-91eb-21b25275a986" (UID: "2c017749-eae9-4edd-91eb-21b25275a986"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:24:05 crc kubenswrapper[4775]: I0123 14:24:05.675514 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c017749-eae9-4edd-91eb-21b25275a986-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:05 crc kubenswrapper[4775]: I0123 14:24:05.675553 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mm74b\" (UniqueName: \"kubernetes.io/projected/2c017749-eae9-4edd-91eb-21b25275a986-kube-api-access-mm74b\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:05 crc kubenswrapper[4775]: I0123 14:24:05.675567 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c017749-eae9-4edd-91eb-21b25275a986-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.064614 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-2qsr9" event={"ID":"2c017749-eae9-4edd-91eb-21b25275a986","Type":"ContainerDied","Data":"4aabf4a53eeee033e98e15a63c87d0b75cdd0f192594e5d631a3fe9af880ef88"} Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.064705 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4aabf4a53eeee033e98e15a63c87d0b75cdd0f192594e5d631a3fe9af880ef88" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.064646 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-2qsr9" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.304974 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-bootstrap-czlxx"] Jan 23 14:24:06 crc kubenswrapper[4775]: E0123 14:24:06.305395 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccc48032-9af5-4d79-bc89-f7d576911b23" containerName="mariadb-account-create-update" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.305415 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccc48032-9af5-4d79-bc89-f7d576911b23" containerName="mariadb-account-create-update" Jan 23 14:24:06 crc kubenswrapper[4775]: E0123 14:24:06.305433 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c017749-eae9-4edd-91eb-21b25275a986" containerName="keystone-db-sync" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.305441 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c017749-eae9-4edd-91eb-21b25275a986" containerName="keystone-db-sync" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.305606 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccc48032-9af5-4d79-bc89-f7d576911b23" containerName="mariadb-account-create-update" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.305626 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c017749-eae9-4edd-91eb-21b25275a986" containerName="keystone-db-sync" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.306244 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.311288 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"osp-secret" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.311757 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.312096 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.312375 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-p9s8k" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.312608 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.335194 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-czlxx"] Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.384552 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-credential-keys\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.384617 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-fernet-keys\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.384723 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-combined-ca-bundle\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.384757 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njhlw\" (UniqueName: \"kubernetes.io/projected/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-kube-api-access-njhlw\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.384906 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-scripts\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.384959 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-config-data\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 
crc kubenswrapper[4775]: I0123 14:24:06.457515 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-db-sync-sgnh6"] Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.458632 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.460226 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-scripts" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.460402 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-placement-dockercfg-nmmns" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.461269 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-config-data" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.475954 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-sync-sgnh6"] Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.486177 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-credential-keys\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.486446 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-fernet-keys\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.486568 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-combined-ca-bundle\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.486597 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njhlw\" (UniqueName: \"kubernetes.io/projected/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-kube-api-access-njhlw\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.486739 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-scripts\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.486779 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-config-data\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.489878 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-combined-ca-bundle\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.489937 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-scripts\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.489949 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-fernet-keys\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.494922 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-config-data\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.515745 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njhlw\" (UniqueName: \"kubernetes.io/projected/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-kube-api-access-njhlw\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.516613 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-credential-keys\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.587633 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-logs\") pod \"placement-db-sync-sgnh6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.587679 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-config-data\") pod \"placement-db-sync-sgnh6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.587709 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2pnr\" (UniqueName: \"kubernetes.io/projected/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-kube-api-access-s2pnr\") pod \"placement-db-sync-sgnh6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.587874 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-combined-ca-bundle\") pod \"placement-db-sync-sgnh6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.587999 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-scripts\") pod \"placement-db-sync-sgnh6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.633720 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.690270 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2pnr\" (UniqueName: \"kubernetes.io/projected/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-kube-api-access-s2pnr\") pod \"placement-db-sync-sgnh6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.691155 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-combined-ca-bundle\") pod \"placement-db-sync-sgnh6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.691322 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-scripts\") pod \"placement-db-sync-sgnh6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.691423 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-logs\") pod \"placement-db-sync-sgnh6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.691449 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-config-data\") pod \"placement-db-sync-sgnh6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.691951 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-logs\") pod \"placement-db-sync-sgnh6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.697221 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-combined-ca-bundle\") pod \"placement-db-sync-sgnh6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.703219 4775 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-scripts\") pod \"placement-db-sync-sgnh6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.708450 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-config-data\") pod \"placement-db-sync-sgnh6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.715925 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2pnr\" (UniqueName: \"kubernetes.io/projected/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-kube-api-access-s2pnr\") pod \"placement-db-sync-sgnh6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.786128 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:07 crc kubenswrapper[4775]: I0123 14:24:07.105488 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-czlxx"] Jan 23 14:24:07 crc kubenswrapper[4775]: I0123 14:24:07.212482 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-sync-sgnh6"] Jan 23 14:24:07 crc kubenswrapper[4775]: W0123 14:24:07.234540 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc22eb7b9_6c07_4edc_a7f7_9e9c4f5acfe6.slice/crio-eb011038cbdcefd0fd0ed9e38fe31f52d0c49640fb62d041740e20dc277e4251 WatchSource:0}: Error finding container eb011038cbdcefd0fd0ed9e38fe31f52d0c49640fb62d041740e20dc277e4251: Status 404 returned error can't find the container with id eb011038cbdcefd0fd0ed9e38fe31f52d0c49640fb62d041740e20dc277e4251 Jan 23 14:24:08 crc kubenswrapper[4775]: I0123 14:24:08.087570 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-czlxx" event={"ID":"730f1c4d-a54d-4644-bf76-c3c4541e8f6d","Type":"ContainerStarted","Data":"2ee19493765c2e784fbd1d7e401c527b26da5317dbb06d292407f1d608775812"} Jan 23 14:24:08 crc kubenswrapper[4775]: I0123 14:24:08.087627 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-czlxx" event={"ID":"730f1c4d-a54d-4644-bf76-c3c4541e8f6d","Type":"ContainerStarted","Data":"bbf1ece63750a1b08bf5d8d8b6b6433c61252996ec4f59fe4318728341c380cb"} Jan 23 14:24:08 crc kubenswrapper[4775]: I0123 14:24:08.090039 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-sgnh6" event={"ID":"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6","Type":"ContainerStarted","Data":"eb011038cbdcefd0fd0ed9e38fe31f52d0c49640fb62d041740e20dc277e4251"} Jan 23 14:24:08 crc kubenswrapper[4775]: I0123 14:24:08.137091 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-bootstrap-czlxx" podStartSLOduration=2.137066597 podStartE2EDuration="2.137066597s" podCreationTimestamp="2026-01-23 14:24:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:24:08.127366504 +0000 UTC m=+1195.122195284" 
watchObservedRunningTime="2026-01-23 14:24:08.137066597 +0000 UTC m=+1195.131895347" Jan 23 14:24:10 crc kubenswrapper[4775]: I0123 14:24:10.106205 4775 generic.go:334] "Generic (PLEG): container finished" podID="730f1c4d-a54d-4644-bf76-c3c4541e8f6d" containerID="2ee19493765c2e784fbd1d7e401c527b26da5317dbb06d292407f1d608775812" exitCode=0 Jan 23 14:24:10 crc kubenswrapper[4775]: I0123 14:24:10.106404 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-czlxx" event={"ID":"730f1c4d-a54d-4644-bf76-c3c4541e8f6d","Type":"ContainerDied","Data":"2ee19493765c2e784fbd1d7e401c527b26da5317dbb06d292407f1d608775812"} Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.119637 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-sgnh6" event={"ID":"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6","Type":"ContainerStarted","Data":"29238591798a36dbd48ca4872cdddc49396b7b446c5f60340f5519ed8229bff3"} Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.144704 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/placement-db-sync-sgnh6" podStartSLOduration=2.154391547 podStartE2EDuration="5.144678869s" podCreationTimestamp="2026-01-23 14:24:06 +0000 UTC" firstStartedPulling="2026-01-23 14:24:07.238173246 +0000 UTC m=+1194.233001986" lastFinishedPulling="2026-01-23 14:24:10.228460568 +0000 UTC m=+1197.223289308" observedRunningTime="2026-01-23 14:24:11.143776694 +0000 UTC m=+1198.138605524" watchObservedRunningTime="2026-01-23 14:24:11.144678869 +0000 UTC m=+1198.139507649" Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.611584 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.789093 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-combined-ca-bundle\") pod \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.789141 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-fernet-keys\") pod \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.789207 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-config-data\") pod \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.789229 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-credential-keys\") pod \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.789286 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njhlw\" (UniqueName: \"kubernetes.io/projected/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-kube-api-access-njhlw\") pod \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\" (UID: 
\"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.789324 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-scripts\") pod \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.796792 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "730f1c4d-a54d-4644-bf76-c3c4541e8f6d" (UID: "730f1c4d-a54d-4644-bf76-c3c4541e8f6d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.796923 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-scripts" (OuterVolumeSpecName: "scripts") pod "730f1c4d-a54d-4644-bf76-c3c4541e8f6d" (UID: "730f1c4d-a54d-4644-bf76-c3c4541e8f6d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.797883 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "730f1c4d-a54d-4644-bf76-c3c4541e8f6d" (UID: "730f1c4d-a54d-4644-bf76-c3c4541e8f6d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.798247 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-kube-api-access-njhlw" (OuterVolumeSpecName: "kube-api-access-njhlw") pod "730f1c4d-a54d-4644-bf76-c3c4541e8f6d" (UID: "730f1c4d-a54d-4644-bf76-c3c4541e8f6d"). InnerVolumeSpecName "kube-api-access-njhlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.826917 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "730f1c4d-a54d-4644-bf76-c3c4541e8f6d" (UID: "730f1c4d-a54d-4644-bf76-c3c4541e8f6d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.829487 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-config-data" (OuterVolumeSpecName: "config-data") pod "730f1c4d-a54d-4644-bf76-c3c4541e8f6d" (UID: "730f1c4d-a54d-4644-bf76-c3c4541e8f6d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.892316 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.892377 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.892400 4775 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.892419 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.892437 4775 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.892459 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njhlw\" (UniqueName: \"kubernetes.io/projected/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-kube-api-access-njhlw\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.130920 4775 generic.go:334] "Generic (PLEG): container finished" podID="c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6" containerID="29238591798a36dbd48ca4872cdddc49396b7b446c5f60340f5519ed8229bff3" exitCode=0 Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.131044 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-sgnh6" event={"ID":"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6","Type":"ContainerDied","Data":"29238591798a36dbd48ca4872cdddc49396b7b446c5f60340f5519ed8229bff3"} Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.133239 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-czlxx" event={"ID":"730f1c4d-a54d-4644-bf76-c3c4541e8f6d","Type":"ContainerDied","Data":"bbf1ece63750a1b08bf5d8d8b6b6433c61252996ec4f59fe4318728341c380cb"} Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.133279 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbf1ece63750a1b08bf5d8d8b6b6433c61252996ec4f59fe4318728341c380cb" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.133345 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.791182 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-czlxx"] Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.798092 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-czlxx"] Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.893737 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-bootstrap-6qmk5"] Jan 23 14:24:12 crc kubenswrapper[4775]: E0123 14:24:12.894384 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="730f1c4d-a54d-4644-bf76-c3c4541e8f6d" containerName="keystone-bootstrap" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.894535 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="730f1c4d-a54d-4644-bf76-c3c4541e8f6d" containerName="keystone-bootstrap" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.894922 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="730f1c4d-a54d-4644-bf76-c3c4541e8f6d" containerName="keystone-bootstrap" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.895687 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.898606 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.898748 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"osp-secret" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.900560 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.901056 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.901056 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-p9s8k" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.908383 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdxrc\" (UniqueName: \"kubernetes.io/projected/b5498924-f821-48fa-88a0-6d8c0c7c01de-kube-api-access-bdxrc\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.908448 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-config-data\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.908533 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-combined-ca-bundle\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.908632 4775 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-fernet-keys\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.908677 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-scripts\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.908710 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-credential-keys\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.926657 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-6qmk5"] Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.010209 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-config-data\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.011030 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-combined-ca-bundle\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.011137 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-fernet-keys\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.011185 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-scripts\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.011210 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-credential-keys\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.011302 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdxrc\" (UniqueName: \"kubernetes.io/projected/b5498924-f821-48fa-88a0-6d8c0c7c01de-kube-api-access-bdxrc\") pod \"keystone-bootstrap-6qmk5\" (UID: 
\"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.015665 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-scripts\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.016271 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-config-data\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.017475 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-credential-keys\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.018078 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-fernet-keys\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.018442 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-combined-ca-bundle\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.037339 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdxrc\" (UniqueName: \"kubernetes.io/projected/b5498924-f821-48fa-88a0-6d8c0c7c01de-kube-api-access-bdxrc\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.238024 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.457138 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.521873 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-logs\") pod \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.521924 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-scripts\") pod \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.521942 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2pnr\" (UniqueName: \"kubernetes.io/projected/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-kube-api-access-s2pnr\") pod \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.522658 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-logs" (OuterVolumeSpecName: "logs") pod "c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6" (UID: "c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.523159 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-config-data\") pod \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.523188 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-combined-ca-bundle\") pod \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.523377 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.552033 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-scripts" (OuterVolumeSpecName: "scripts") pod "c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6" (UID: "c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.552555 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-kube-api-access-s2pnr" (OuterVolumeSpecName: "kube-api-access-s2pnr") pod "c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6" (UID: "c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6"). InnerVolumeSpecName "kube-api-access-s2pnr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.570010 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-config-data" (OuterVolumeSpecName: "config-data") pod "c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6" (UID: "c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.604020 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6" (UID: "c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.624749 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2pnr\" (UniqueName: \"kubernetes.io/projected/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-kube-api-access-s2pnr\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.624814 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.624826 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.624837 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.702876 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-6qmk5"] Jan 23 14:24:13 crc kubenswrapper[4775]: W0123 14:24:13.710134 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5498924_f821_48fa_88a0_6d8c0c7c01de.slice/crio-cd52832bbef8dab9e3735f1d292a892ae5b426c73ea12a7d73386e0f32a43d37 WatchSource:0}: Error finding container cd52832bbef8dab9e3735f1d292a892ae5b426c73ea12a7d73386e0f32a43d37: Status 404 returned error can't find the container with id cd52832bbef8dab9e3735f1d292a892ae5b426c73ea12a7d73386e0f32a43d37 Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.733457 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="730f1c4d-a54d-4644-bf76-c3c4541e8f6d" path="/var/lib/kubelet/pods/730f1c4d-a54d-4644-bf76-c3c4541e8f6d/volumes" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.150527 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-6qmk5" event={"ID":"b5498924-f821-48fa-88a0-6d8c0c7c01de","Type":"ContainerStarted","Data":"03bac1f849c95644ae09fd2e62cba3da4e7525c38066ec2837085c381ddd303a"} Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.150575 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-6qmk5" 
event={"ID":"b5498924-f821-48fa-88a0-6d8c0c7c01de","Type":"ContainerStarted","Data":"cd52832bbef8dab9e3735f1d292a892ae5b426c73ea12a7d73386e0f32a43d37"} Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.152952 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-sgnh6" event={"ID":"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6","Type":"ContainerDied","Data":"eb011038cbdcefd0fd0ed9e38fe31f52d0c49640fb62d041740e20dc277e4251"} Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.152985 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb011038cbdcefd0fd0ed9e38fe31f52d0c49640fb62d041740e20dc277e4251" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.153041 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.186237 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-bootstrap-6qmk5" podStartSLOduration=2.186216238 podStartE2EDuration="2.186216238s" podCreationTimestamp="2026-01-23 14:24:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:24:14.179609371 +0000 UTC m=+1201.174438161" watchObservedRunningTime="2026-01-23 14:24:14.186216238 +0000 UTC m=+1201.181044998" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.268291 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-7787b67bb8-psq7t"] Jan 23 14:24:14 crc kubenswrapper[4775]: E0123 14:24:14.268686 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6" containerName="placement-db-sync" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.268704 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6" containerName="placement-db-sync" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.268906 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6" containerName="placement-db-sync" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.269866 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.271697 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-scripts" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.272758 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-placement-dockercfg-nmmns" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.273246 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-config-data" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.285405 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-7787b67bb8-psq7t"] Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.335447 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b653824-2e32-431a-8b16-f8687610c0fe-logs\") pod \"placement-7787b67bb8-psq7t\" (UID: \"6b653824-2e32-431a-8b16-f8687610c0fe\") " pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.335554 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b653824-2e32-431a-8b16-f8687610c0fe-config-data\") pod \"placement-7787b67bb8-psq7t\" (UID: \"6b653824-2e32-431a-8b16-f8687610c0fe\") " pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.335589 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b653824-2e32-431a-8b16-f8687610c0fe-combined-ca-bundle\") pod \"placement-7787b67bb8-psq7t\" (UID: \"6b653824-2e32-431a-8b16-f8687610c0fe\") " pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.335619 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b653824-2e32-431a-8b16-f8687610c0fe-scripts\") pod \"placement-7787b67bb8-psq7t\" (UID: \"6b653824-2e32-431a-8b16-f8687610c0fe\") " pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.335657 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5j4h\" (UniqueName: \"kubernetes.io/projected/6b653824-2e32-431a-8b16-f8687610c0fe-kube-api-access-h5j4h\") pod \"placement-7787b67bb8-psq7t\" (UID: \"6b653824-2e32-431a-8b16-f8687610c0fe\") " pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.437055 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b653824-2e32-431a-8b16-f8687610c0fe-logs\") pod \"placement-7787b67bb8-psq7t\" (UID: \"6b653824-2e32-431a-8b16-f8687610c0fe\") " pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.437214 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b653824-2e32-431a-8b16-f8687610c0fe-config-data\") pod \"placement-7787b67bb8-psq7t\" (UID: \"6b653824-2e32-431a-8b16-f8687610c0fe\") " 
pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.437264 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b653824-2e32-431a-8b16-f8687610c0fe-combined-ca-bundle\") pod \"placement-7787b67bb8-psq7t\" (UID: \"6b653824-2e32-431a-8b16-f8687610c0fe\") " pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.437305 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b653824-2e32-431a-8b16-f8687610c0fe-scripts\") pod \"placement-7787b67bb8-psq7t\" (UID: \"6b653824-2e32-431a-8b16-f8687610c0fe\") " pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.437370 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5j4h\" (UniqueName: \"kubernetes.io/projected/6b653824-2e32-431a-8b16-f8687610c0fe-kube-api-access-h5j4h\") pod \"placement-7787b67bb8-psq7t\" (UID: \"6b653824-2e32-431a-8b16-f8687610c0fe\") " pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.437873 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b653824-2e32-431a-8b16-f8687610c0fe-logs\") pod \"placement-7787b67bb8-psq7t\" (UID: \"6b653824-2e32-431a-8b16-f8687610c0fe\") " pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.444339 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b653824-2e32-431a-8b16-f8687610c0fe-scripts\") pod \"placement-7787b67bb8-psq7t\" (UID: \"6b653824-2e32-431a-8b16-f8687610c0fe\") " pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.444587 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b653824-2e32-431a-8b16-f8687610c0fe-combined-ca-bundle\") pod \"placement-7787b67bb8-psq7t\" (UID: \"6b653824-2e32-431a-8b16-f8687610c0fe\") " pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.445729 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b653824-2e32-431a-8b16-f8687610c0fe-config-data\") pod \"placement-7787b67bb8-psq7t\" (UID: \"6b653824-2e32-431a-8b16-f8687610c0fe\") " pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.464457 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5j4h\" (UniqueName: \"kubernetes.io/projected/6b653824-2e32-431a-8b16-f8687610c0fe-kube-api-access-h5j4h\") pod \"placement-7787b67bb8-psq7t\" (UID: \"6b653824-2e32-431a-8b16-f8687610c0fe\") " pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.601126 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-placement-dockercfg-nmmns" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.609184 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.876756 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-7787b67bb8-psq7t"] Jan 23 14:24:14 crc kubenswrapper[4775]: W0123 14:24:14.885699 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b653824_2e32_431a_8b16_f8687610c0fe.slice/crio-8411a2b6a590b98245ce0251a16f853f48c80d9a0dce0115f0fe75114b189fe8 WatchSource:0}: Error finding container 8411a2b6a590b98245ce0251a16f853f48c80d9a0dce0115f0fe75114b189fe8: Status 404 returned error can't find the container with id 8411a2b6a590b98245ce0251a16f853f48c80d9a0dce0115f0fe75114b189fe8 Jan 23 14:24:15 crc kubenswrapper[4775]: I0123 14:24:15.163621 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-7787b67bb8-psq7t" event={"ID":"6b653824-2e32-431a-8b16-f8687610c0fe","Type":"ContainerStarted","Data":"d66c47dd24a97c5e406bb8f8c2966868508dde888e26b413ba616252a0af9cfd"} Jan 23 14:24:15 crc kubenswrapper[4775]: I0123 14:24:15.163677 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-7787b67bb8-psq7t" event={"ID":"6b653824-2e32-431a-8b16-f8687610c0fe","Type":"ContainerStarted","Data":"8411a2b6a590b98245ce0251a16f853f48c80d9a0dce0115f0fe75114b189fe8"} Jan 23 14:24:16 crc kubenswrapper[4775]: I0123 14:24:16.177177 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-7787b67bb8-psq7t" event={"ID":"6b653824-2e32-431a-8b16-f8687610c0fe","Type":"ContainerStarted","Data":"bcbb533ea799345eeb794af31254e6286aa09085890e15fe08353e9133460887"} Jan 23 14:24:16 crc kubenswrapper[4775]: I0123 14:24:16.177683 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:16 crc kubenswrapper[4775]: I0123 14:24:16.177711 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:16 crc kubenswrapper[4775]: I0123 14:24:16.215635 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/placement-7787b67bb8-psq7t" podStartSLOduration=2.215601177 podStartE2EDuration="2.215601177s" podCreationTimestamp="2026-01-23 14:24:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:24:16.210207755 +0000 UTC m=+1203.205036525" watchObservedRunningTime="2026-01-23 14:24:16.215601177 +0000 UTC m=+1203.210429957" Jan 23 14:24:17 crc kubenswrapper[4775]: I0123 14:24:17.185020 4775 generic.go:334] "Generic (PLEG): container finished" podID="b5498924-f821-48fa-88a0-6d8c0c7c01de" containerID="03bac1f849c95644ae09fd2e62cba3da4e7525c38066ec2837085c381ddd303a" exitCode=0 Jan 23 14:24:17 crc kubenswrapper[4775]: I0123 14:24:17.185110 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-6qmk5" event={"ID":"b5498924-f821-48fa-88a0-6d8c0c7c01de","Type":"ContainerDied","Data":"03bac1f849c95644ae09fd2e62cba3da4e7525c38066ec2837085c381ddd303a"} Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.555283 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.619622 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdxrc\" (UniqueName: \"kubernetes.io/projected/b5498924-f821-48fa-88a0-6d8c0c7c01de-kube-api-access-bdxrc\") pod \"b5498924-f821-48fa-88a0-6d8c0c7c01de\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.620158 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-config-data\") pod \"b5498924-f821-48fa-88a0-6d8c0c7c01de\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.620365 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-fernet-keys\") pod \"b5498924-f821-48fa-88a0-6d8c0c7c01de\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.620691 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-scripts\") pod \"b5498924-f821-48fa-88a0-6d8c0c7c01de\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.621059 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-combined-ca-bundle\") pod \"b5498924-f821-48fa-88a0-6d8c0c7c01de\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.621361 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-credential-keys\") pod \"b5498924-f821-48fa-88a0-6d8c0c7c01de\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.626221 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5498924-f821-48fa-88a0-6d8c0c7c01de-kube-api-access-bdxrc" (OuterVolumeSpecName: "kube-api-access-bdxrc") pod "b5498924-f821-48fa-88a0-6d8c0c7c01de" (UID: "b5498924-f821-48fa-88a0-6d8c0c7c01de"). InnerVolumeSpecName "kube-api-access-bdxrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.626218 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-scripts" (OuterVolumeSpecName: "scripts") pod "b5498924-f821-48fa-88a0-6d8c0c7c01de" (UID: "b5498924-f821-48fa-88a0-6d8c0c7c01de"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.626700 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b5498924-f821-48fa-88a0-6d8c0c7c01de" (UID: "b5498924-f821-48fa-88a0-6d8c0c7c01de"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.628306 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b5498924-f821-48fa-88a0-6d8c0c7c01de" (UID: "b5498924-f821-48fa-88a0-6d8c0c7c01de"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.646031 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5498924-f821-48fa-88a0-6d8c0c7c01de" (UID: "b5498924-f821-48fa-88a0-6d8c0c7c01de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.646786 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-config-data" (OuterVolumeSpecName: "config-data") pod "b5498924-f821-48fa-88a0-6d8c0c7c01de" (UID: "b5498924-f821-48fa-88a0-6d8c0c7c01de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.724057 4775 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.724116 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdxrc\" (UniqueName: \"kubernetes.io/projected/b5498924-f821-48fa-88a0-6d8c0c7c01de-kube-api-access-bdxrc\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.724140 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.724158 4775 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.724175 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.724193 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.208767 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-6qmk5" event={"ID":"b5498924-f821-48fa-88a0-6d8c0c7c01de","Type":"ContainerDied","Data":"cd52832bbef8dab9e3735f1d292a892ae5b426c73ea12a7d73386e0f32a43d37"} Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.208855 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd52832bbef8dab9e3735f1d292a892ae5b426c73ea12a7d73386e0f32a43d37" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.208978 4775 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.439924 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-7d978f-gdlmv"] Jan 23 14:24:19 crc kubenswrapper[4775]: E0123 14:24:19.440599 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5498924-f821-48fa-88a0-6d8c0c7c01de" containerName="keystone-bootstrap" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.440645 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5498924-f821-48fa-88a0-6d8c0c7c01de" containerName="keystone-bootstrap" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.441093 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5498924-f821-48fa-88a0-6d8c0c7c01de" containerName="keystone-bootstrap" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.442211 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.446503 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.446848 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-p9s8k" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.447284 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.451114 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-7d978f-gdlmv"] Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.454059 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.537877 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/898c8554-82c6-4777-8869-15981e356a84-credential-keys\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.538294 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/898c8554-82c6-4777-8869-15981e356a84-fernet-keys\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.538482 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/898c8554-82c6-4777-8869-15981e356a84-config-data\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.538739 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/898c8554-82c6-4777-8869-15981e356a84-combined-ca-bundle\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 
14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.538971 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w87qr\" (UniqueName: \"kubernetes.io/projected/898c8554-82c6-4777-8869-15981e356a84-kube-api-access-w87qr\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.539143 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/898c8554-82c6-4777-8869-15981e356a84-scripts\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.640554 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/898c8554-82c6-4777-8869-15981e356a84-combined-ca-bundle\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.640666 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w87qr\" (UniqueName: \"kubernetes.io/projected/898c8554-82c6-4777-8869-15981e356a84-kube-api-access-w87qr\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.640728 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/898c8554-82c6-4777-8869-15981e356a84-scripts\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.640826 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/898c8554-82c6-4777-8869-15981e356a84-credential-keys\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.640897 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/898c8554-82c6-4777-8869-15981e356a84-fernet-keys\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.640940 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/898c8554-82c6-4777-8869-15981e356a84-config-data\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.645212 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/898c8554-82c6-4777-8869-15981e356a84-scripts\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.646030 4775 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/898c8554-82c6-4777-8869-15981e356a84-credential-keys\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.646222 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/898c8554-82c6-4777-8869-15981e356a84-fernet-keys\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.648104 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/898c8554-82c6-4777-8869-15981e356a84-combined-ca-bundle\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.652437 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/898c8554-82c6-4777-8869-15981e356a84-config-data\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.667556 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w87qr\" (UniqueName: \"kubernetes.io/projected/898c8554-82c6-4777-8869-15981e356a84-kube-api-access-w87qr\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.769446 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:20 crc kubenswrapper[4775]: I0123 14:24:20.273870 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-7d978f-gdlmv"] Jan 23 14:24:21 crc kubenswrapper[4775]: I0123 14:24:21.228061 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-7d978f-gdlmv" event={"ID":"898c8554-82c6-4777-8869-15981e356a84","Type":"ContainerStarted","Data":"250fdd231ab968eff2e27c95649eb833386f8def0f65064953417f57128c73ed"} Jan 23 14:24:21 crc kubenswrapper[4775]: I0123 14:24:21.228746 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-7d978f-gdlmv" event={"ID":"898c8554-82c6-4777-8869-15981e356a84","Type":"ContainerStarted","Data":"94dc95cb3a14628d6cfd18a608edddc1e814760956d313fb91c94f21cda39255"} Jan 23 14:24:21 crc kubenswrapper[4775]: I0123 14:24:21.230114 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:21 crc kubenswrapper[4775]: I0123 14:24:21.268516 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-7d978f-gdlmv" podStartSLOduration=2.268493288 podStartE2EDuration="2.268493288s" podCreationTimestamp="2026-01-23 14:24:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:24:21.261525931 +0000 UTC m=+1208.256354681" watchObservedRunningTime="2026-01-23 14:24:21.268493288 +0000 UTC m=+1208.263322058" Jan 23 14:24:45 crc kubenswrapper[4775]: I0123 14:24:45.699155 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:45 crc kubenswrapper[4775]: I0123 14:24:45.707735 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:51 crc kubenswrapper[4775]: I0123 14:24:51.342867 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.459331 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.462066 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstackclient" Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.467378 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstackclient-openstackclient-dockercfg-tkffp" Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.468118 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-config" Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.468350 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstack-config-secret" Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.469962 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.657265 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76733f2d-491c-45dd-bcf5-1a4423019717-combined-ca-bundle\") pod \"openstackclient\" (UID: \"76733f2d-491c-45dd-bcf5-1a4423019717\") " pod="nova-kuttl-default/openstackclient" Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.657403 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/76733f2d-491c-45dd-bcf5-1a4423019717-openstack-config\") pod \"openstackclient\" (UID: \"76733f2d-491c-45dd-bcf5-1a4423019717\") " pod="nova-kuttl-default/openstackclient" Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.657622 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hhxk\" (UniqueName: \"kubernetes.io/projected/76733f2d-491c-45dd-bcf5-1a4423019717-kube-api-access-6hhxk\") pod \"openstackclient\" (UID: \"76733f2d-491c-45dd-bcf5-1a4423019717\") " pod="nova-kuttl-default/openstackclient" Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.657679 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/76733f2d-491c-45dd-bcf5-1a4423019717-openstack-config-secret\") pod \"openstackclient\" (UID: \"76733f2d-491c-45dd-bcf5-1a4423019717\") " pod="nova-kuttl-default/openstackclient" Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.759227 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/76733f2d-491c-45dd-bcf5-1a4423019717-openstack-config\") pod \"openstackclient\" (UID: \"76733f2d-491c-45dd-bcf5-1a4423019717\") " pod="nova-kuttl-default/openstackclient" Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.759340 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hhxk\" (UniqueName: \"kubernetes.io/projected/76733f2d-491c-45dd-bcf5-1a4423019717-kube-api-access-6hhxk\") pod \"openstackclient\" (UID: \"76733f2d-491c-45dd-bcf5-1a4423019717\") " pod="nova-kuttl-default/openstackclient" Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.759378 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/76733f2d-491c-45dd-bcf5-1a4423019717-openstack-config-secret\") pod \"openstackclient\" (UID: \"76733f2d-491c-45dd-bcf5-1a4423019717\") " pod="nova-kuttl-default/openstackclient" Jan 23 14:24:53 crc 
kubenswrapper[4775]: I0123 14:24:53.759438 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76733f2d-491c-45dd-bcf5-1a4423019717-combined-ca-bundle\") pod \"openstackclient\" (UID: \"76733f2d-491c-45dd-bcf5-1a4423019717\") " pod="nova-kuttl-default/openstackclient" Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.761342 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/76733f2d-491c-45dd-bcf5-1a4423019717-openstack-config\") pod \"openstackclient\" (UID: \"76733f2d-491c-45dd-bcf5-1a4423019717\") " pod="nova-kuttl-default/openstackclient" Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.769372 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/76733f2d-491c-45dd-bcf5-1a4423019717-openstack-config-secret\") pod \"openstackclient\" (UID: \"76733f2d-491c-45dd-bcf5-1a4423019717\") " pod="nova-kuttl-default/openstackclient" Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.769394 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76733f2d-491c-45dd-bcf5-1a4423019717-combined-ca-bundle\") pod \"openstackclient\" (UID: \"76733f2d-491c-45dd-bcf5-1a4423019717\") " pod="nova-kuttl-default/openstackclient" Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.789501 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hhxk\" (UniqueName: \"kubernetes.io/projected/76733f2d-491c-45dd-bcf5-1a4423019717-kube-api-access-6hhxk\") pod \"openstackclient\" (UID: \"76733f2d-491c-45dd-bcf5-1a4423019717\") " pod="nova-kuttl-default/openstackclient" Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.802579 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstackclient" Jan 23 14:24:54 crc kubenswrapper[4775]: I0123 14:24:54.078496 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 23 14:24:54 crc kubenswrapper[4775]: I0123 14:24:54.572484 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstackclient" event={"ID":"76733f2d-491c-45dd-bcf5-1a4423019717","Type":"ContainerStarted","Data":"f85a7383fa7a8273d0fb0dbacfd2e742a75b8cbb16bb2e5e6028fd2297c0d9af"} Jan 23 14:25:02 crc kubenswrapper[4775]: I0123 14:25:02.639117 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstackclient" event={"ID":"76733f2d-491c-45dd-bcf5-1a4423019717","Type":"ContainerStarted","Data":"489441ff2ea4269ef000c88513b580c4205fc44985bfbde6f23c1ce7ded2b2ea"} Jan 23 14:25:02 crc kubenswrapper[4775]: I0123 14:25:02.672747 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/openstackclient" podStartSLOduration=1.653875485 podStartE2EDuration="9.672719149s" podCreationTimestamp="2026-01-23 14:24:53 +0000 UTC" firstStartedPulling="2026-01-23 14:24:54.082849957 +0000 UTC m=+1241.077678707" lastFinishedPulling="2026-01-23 14:25:02.101693591 +0000 UTC m=+1249.096522371" observedRunningTime="2026-01-23 14:25:02.662064298 +0000 UTC m=+1249.656893078" watchObservedRunningTime="2026-01-23 14:25:02.672719149 +0000 UTC m=+1249.667547929" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.061801 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/nova-operator-controller-manager-d9495b985-k98mk"] Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.062939 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/nova-operator-controller-manager-d9495b985-k98mk" podUID="9bad88d6-5ca9-4176-904d-72b793e1361e" containerName="manager" containerID="cri-o://e73b6eeb014674539aea8fd7195079debeadbaa135e4e4e1baacaed853f9a774" gracePeriod=10 Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.107595 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w"] Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.107847 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w" podUID="355da547-d965-4754-8730-b9c8a20fd930" containerName="operator" containerID="cri-o://29bef9650740f55bafd48157808e3591f52eafd13be1ee85e76f5102a8d9c94d" gracePeriod=10 Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.403359 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-index-xx8wj"] Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.404486 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-index-xx8wj" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.445298 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-xx8wj"] Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.518049 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-index-dockercfg-2sllt" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.535901 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2fd2\" (UniqueName: \"kubernetes.io/projected/552805f7-e5f6-447b-a319-a3e3d62608f3-kube-api-access-f2fd2\") pod \"nova-operator-index-xx8wj\" (UID: \"552805f7-e5f6-447b-a319-a3e3d62608f3\") " pod="openstack-operators/nova-operator-index-xx8wj" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.589177 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-d9495b985-k98mk" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.637615 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2fd2\" (UniqueName: \"kubernetes.io/projected/552805f7-e5f6-447b-a319-a3e3d62608f3-kube-api-access-f2fd2\") pod \"nova-operator-index-xx8wj\" (UID: \"552805f7-e5f6-447b-a319-a3e3d62608f3\") " pod="openstack-operators/nova-operator-index-xx8wj" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.662429 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.692856 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2fd2\" (UniqueName: \"kubernetes.io/projected/552805f7-e5f6-447b-a319-a3e3d62608f3-kube-api-access-f2fd2\") pod \"nova-operator-index-xx8wj\" (UID: \"552805f7-e5f6-447b-a319-a3e3d62608f3\") " pod="openstack-operators/nova-operator-index-xx8wj" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.738814 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnvz4\" (UniqueName: \"kubernetes.io/projected/355da547-d965-4754-8730-b9c8a20fd930-kube-api-access-qnvz4\") pod \"355da547-d965-4754-8730-b9c8a20fd930\" (UID: \"355da547-d965-4754-8730-b9c8a20fd930\") " Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.738859 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gskfg\" (UniqueName: \"kubernetes.io/projected/9bad88d6-5ca9-4176-904d-72b793e1361e-kube-api-access-gskfg\") pod \"9bad88d6-5ca9-4176-904d-72b793e1361e\" (UID: \"9bad88d6-5ca9-4176-904d-72b793e1361e\") " Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.742141 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/355da547-d965-4754-8730-b9c8a20fd930-kube-api-access-qnvz4" (OuterVolumeSpecName: "kube-api-access-qnvz4") pod "355da547-d965-4754-8730-b9c8a20fd930" (UID: "355da547-d965-4754-8730-b9c8a20fd930"). InnerVolumeSpecName "kube-api-access-qnvz4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.742204 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bad88d6-5ca9-4176-904d-72b793e1361e-kube-api-access-gskfg" (OuterVolumeSpecName: "kube-api-access-gskfg") pod "9bad88d6-5ca9-4176-904d-72b793e1361e" (UID: "9bad88d6-5ca9-4176-904d-72b793e1361e"). InnerVolumeSpecName "kube-api-access-gskfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.786621 4775 generic.go:334] "Generic (PLEG): container finished" podID="9bad88d6-5ca9-4176-904d-72b793e1361e" containerID="e73b6eeb014674539aea8fd7195079debeadbaa135e4e4e1baacaed853f9a774" exitCode=0 Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.786683 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-d9495b985-k98mk" event={"ID":"9bad88d6-5ca9-4176-904d-72b793e1361e","Type":"ContainerDied","Data":"e73b6eeb014674539aea8fd7195079debeadbaa135e4e4e1baacaed853f9a774"} Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.786707 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-d9495b985-k98mk" event={"ID":"9bad88d6-5ca9-4176-904d-72b793e1361e","Type":"ContainerDied","Data":"3b31a7012ea48421023dcf9b284625ce3e8507aa2773ce103b29a5ca80ded146"} Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.786723 4775 scope.go:117] "RemoveContainer" containerID="e73b6eeb014674539aea8fd7195079debeadbaa135e4e4e1baacaed853f9a774" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.786860 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-d9495b985-k98mk" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.793606 4775 generic.go:334] "Generic (PLEG): container finished" podID="355da547-d965-4754-8730-b9c8a20fd930" containerID="29bef9650740f55bafd48157808e3591f52eafd13be1ee85e76f5102a8d9c94d" exitCode=0 Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.793641 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w" event={"ID":"355da547-d965-4754-8730-b9c8a20fd930","Type":"ContainerDied","Data":"29bef9650740f55bafd48157808e3591f52eafd13be1ee85e76f5102a8d9c94d"} Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.793726 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w" event={"ID":"355da547-d965-4754-8730-b9c8a20fd930","Type":"ContainerDied","Data":"9226d2ede7beb9208ad931c1d54e8ae0eea8cc9501e5c82efcf4ccfa1586382e"} Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.793815 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.816783 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/nova-operator-controller-manager-d9495b985-k98mk"] Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.824864 4775 scope.go:117] "RemoveContainer" containerID="e73b6eeb014674539aea8fd7195079debeadbaa135e4e4e1baacaed853f9a774" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.825135 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/nova-operator-controller-manager-d9495b985-k98mk"] Jan 23 14:25:16 crc kubenswrapper[4775]: E0123 14:25:16.825334 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e73b6eeb014674539aea8fd7195079debeadbaa135e4e4e1baacaed853f9a774\": container with ID starting with e73b6eeb014674539aea8fd7195079debeadbaa135e4e4e1baacaed853f9a774 not found: ID does not exist" containerID="e73b6eeb014674539aea8fd7195079debeadbaa135e4e4e1baacaed853f9a774" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.825368 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e73b6eeb014674539aea8fd7195079debeadbaa135e4e4e1baacaed853f9a774"} err="failed to get container status \"e73b6eeb014674539aea8fd7195079debeadbaa135e4e4e1baacaed853f9a774\": rpc error: code = NotFound desc = could not find container \"e73b6eeb014674539aea8fd7195079debeadbaa135e4e4e1baacaed853f9a774\": container with ID starting with e73b6eeb014674539aea8fd7195079debeadbaa135e4e4e1baacaed853f9a774 not found: ID does not exist" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.825395 4775 scope.go:117] "RemoveContainer" containerID="29bef9650740f55bafd48157808e3591f52eafd13be1ee85e76f5102a8d9c94d" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.830376 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w"] Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.835021 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w"] Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.840355 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnvz4\" (UniqueName: \"kubernetes.io/projected/355da547-d965-4754-8730-b9c8a20fd930-kube-api-access-qnvz4\") on node \"crc\" DevicePath \"\"" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.840394 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gskfg\" (UniqueName: \"kubernetes.io/projected/9bad88d6-5ca9-4176-904d-72b793e1361e-kube-api-access-gskfg\") on node \"crc\" DevicePath \"\"" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.843753 4775 scope.go:117] "RemoveContainer" containerID="29bef9650740f55bafd48157808e3591f52eafd13be1ee85e76f5102a8d9c94d" Jan 23 14:25:16 crc kubenswrapper[4775]: E0123 14:25:16.844175 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29bef9650740f55bafd48157808e3591f52eafd13be1ee85e76f5102a8d9c94d\": container with ID starting with 29bef9650740f55bafd48157808e3591f52eafd13be1ee85e76f5102a8d9c94d not found: ID does not exist" containerID="29bef9650740f55bafd48157808e3591f52eafd13be1ee85e76f5102a8d9c94d" Jan 23 14:25:16 crc kubenswrapper[4775]: 
I0123 14:25:16.844202 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29bef9650740f55bafd48157808e3591f52eafd13be1ee85e76f5102a8d9c94d"} err="failed to get container status \"29bef9650740f55bafd48157808e3591f52eafd13be1ee85e76f5102a8d9c94d\": rpc error: code = NotFound desc = could not find container \"29bef9650740f55bafd48157808e3591f52eafd13be1ee85e76f5102a8d9c94d\": container with ID starting with 29bef9650740f55bafd48157808e3591f52eafd13be1ee85e76f5102a8d9c94d not found: ID does not exist" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.845777 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-index-xx8wj" Jan 23 14:25:17 crc kubenswrapper[4775]: I0123 14:25:17.280368 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-xx8wj"] Jan 23 14:25:17 crc kubenswrapper[4775]: W0123 14:25:17.288867 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod552805f7_e5f6_447b_a319_a3e3d62608f3.slice/crio-31cb15ec06acc02d60feb50c3051c3474687b3e8e841a6f7053b73627327039b WatchSource:0}: Error finding container 31cb15ec06acc02d60feb50c3051c3474687b3e8e841a6f7053b73627327039b: Status 404 returned error can't find the container with id 31cb15ec06acc02d60feb50c3051c3474687b3e8e841a6f7053b73627327039b Jan 23 14:25:17 crc kubenswrapper[4775]: I0123 14:25:17.725135 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="355da547-d965-4754-8730-b9c8a20fd930" path="/var/lib/kubelet/pods/355da547-d965-4754-8730-b9c8a20fd930/volumes" Jan 23 14:25:17 crc kubenswrapper[4775]: I0123 14:25:17.725897 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bad88d6-5ca9-4176-904d-72b793e1361e" path="/var/lib/kubelet/pods/9bad88d6-5ca9-4176-904d-72b793e1361e/volumes" Jan 23 14:25:17 crc kubenswrapper[4775]: I0123 14:25:17.811900 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-xx8wj" event={"ID":"552805f7-e5f6-447b-a319-a3e3d62608f3","Type":"ContainerStarted","Data":"6b90c58f739ea4e1c7fc4223da3095218fda2c74a9cf6d304e75fd96ddcf88d0"} Jan 23 14:25:17 crc kubenswrapper[4775]: I0123 14:25:17.811952 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-xx8wj" event={"ID":"552805f7-e5f6-447b-a319-a3e3d62608f3","Type":"ContainerStarted","Data":"31cb15ec06acc02d60feb50c3051c3474687b3e8e841a6f7053b73627327039b"} Jan 23 14:25:17 crc kubenswrapper[4775]: I0123 14:25:17.843702 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-index-xx8wj" podStartSLOduration=1.685060688 podStartE2EDuration="1.843678626s" podCreationTimestamp="2026-01-23 14:25:16 +0000 UTC" firstStartedPulling="2026-01-23 14:25:17.290689517 +0000 UTC m=+1264.285518257" lastFinishedPulling="2026-01-23 14:25:17.449307415 +0000 UTC m=+1264.444136195" observedRunningTime="2026-01-23 14:25:17.8381586 +0000 UTC m=+1264.832987340" watchObservedRunningTime="2026-01-23 14:25:17.843678626 +0000 UTC m=+1264.838507376" Jan 23 14:25:19 crc kubenswrapper[4775]: I0123 14:25:19.142191 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/nova-operator-index-xx8wj"] Jan 23 14:25:19 crc kubenswrapper[4775]: I0123 14:25:19.564220 4775 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/nova-operator-index-x4gqk"] Jan 23 14:25:19 crc kubenswrapper[4775]: E0123 14:25:19.564868 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="355da547-d965-4754-8730-b9c8a20fd930" containerName="operator" Jan 23 14:25:19 crc kubenswrapper[4775]: I0123 14:25:19.564899 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="355da547-d965-4754-8730-b9c8a20fd930" containerName="operator" Jan 23 14:25:19 crc kubenswrapper[4775]: E0123 14:25:19.564928 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bad88d6-5ca9-4176-904d-72b793e1361e" containerName="manager" Jan 23 14:25:19 crc kubenswrapper[4775]: I0123 14:25:19.564967 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bad88d6-5ca9-4176-904d-72b793e1361e" containerName="manager" Jan 23 14:25:19 crc kubenswrapper[4775]: I0123 14:25:19.565422 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="355da547-d965-4754-8730-b9c8a20fd930" containerName="operator" Jan 23 14:25:19 crc kubenswrapper[4775]: I0123 14:25:19.565460 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bad88d6-5ca9-4176-904d-72b793e1361e" containerName="manager" Jan 23 14:25:19 crc kubenswrapper[4775]: I0123 14:25:19.566520 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-index-x4gqk" Jan 23 14:25:19 crc kubenswrapper[4775]: I0123 14:25:19.596628 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-x4gqk"] Jan 23 14:25:19 crc kubenswrapper[4775]: I0123 14:25:19.685020 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdkvs\" (UniqueName: \"kubernetes.io/projected/78f375c8-5d62-4cbb-b348-8205d476d603-kube-api-access-xdkvs\") pod \"nova-operator-index-x4gqk\" (UID: \"78f375c8-5d62-4cbb-b348-8205d476d603\") " pod="openstack-operators/nova-operator-index-x4gqk" Jan 23 14:25:19 crc kubenswrapper[4775]: I0123 14:25:19.787602 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdkvs\" (UniqueName: \"kubernetes.io/projected/78f375c8-5d62-4cbb-b348-8205d476d603-kube-api-access-xdkvs\") pod \"nova-operator-index-x4gqk\" (UID: \"78f375c8-5d62-4cbb-b348-8205d476d603\") " pod="openstack-operators/nova-operator-index-x4gqk" Jan 23 14:25:19 crc kubenswrapper[4775]: I0123 14:25:19.815478 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdkvs\" (UniqueName: \"kubernetes.io/projected/78f375c8-5d62-4cbb-b348-8205d476d603-kube-api-access-xdkvs\") pod \"nova-operator-index-x4gqk\" (UID: \"78f375c8-5d62-4cbb-b348-8205d476d603\") " pod="openstack-operators/nova-operator-index-x4gqk" Jan 23 14:25:19 crc kubenswrapper[4775]: I0123 14:25:19.835357 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/nova-operator-index-xx8wj" podUID="552805f7-e5f6-447b-a319-a3e3d62608f3" containerName="registry-server" containerID="cri-o://6b90c58f739ea4e1c7fc4223da3095218fda2c74a9cf6d304e75fd96ddcf88d0" gracePeriod=2 Jan 23 14:25:19 crc kubenswrapper[4775]: I0123 14:25:19.892736 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-index-x4gqk" Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.244079 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-index-xx8wj" Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.396570 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2fd2\" (UniqueName: \"kubernetes.io/projected/552805f7-e5f6-447b-a319-a3e3d62608f3-kube-api-access-f2fd2\") pod \"552805f7-e5f6-447b-a319-a3e3d62608f3\" (UID: \"552805f7-e5f6-447b-a319-a3e3d62608f3\") " Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.405027 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/552805f7-e5f6-447b-a319-a3e3d62608f3-kube-api-access-f2fd2" (OuterVolumeSpecName: "kube-api-access-f2fd2") pod "552805f7-e5f6-447b-a319-a3e3d62608f3" (UID: "552805f7-e5f6-447b-a319-a3e3d62608f3"). InnerVolumeSpecName "kube-api-access-f2fd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.441070 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-x4gqk"] Jan 23 14:25:20 crc kubenswrapper[4775]: W0123 14:25:20.447628 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78f375c8_5d62_4cbb_b348_8205d476d603.slice/crio-968c26b22bd908cb1e261c87b6923181069238713e5e8019feeaa47f8ae7988f WatchSource:0}: Error finding container 968c26b22bd908cb1e261c87b6923181069238713e5e8019feeaa47f8ae7988f: Status 404 returned error can't find the container with id 968c26b22bd908cb1e261c87b6923181069238713e5e8019feeaa47f8ae7988f Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.499234 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2fd2\" (UniqueName: \"kubernetes.io/projected/552805f7-e5f6-447b-a319-a3e3d62608f3-kube-api-access-f2fd2\") on node \"crc\" DevicePath \"\"" Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.846120 4775 generic.go:334] "Generic (PLEG): container finished" podID="552805f7-e5f6-447b-a319-a3e3d62608f3" containerID="6b90c58f739ea4e1c7fc4223da3095218fda2c74a9cf6d304e75fd96ddcf88d0" exitCode=0 Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.846242 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-index-xx8wj"
Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.846281 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-xx8wj" event={"ID":"552805f7-e5f6-447b-a319-a3e3d62608f3","Type":"ContainerDied","Data":"6b90c58f739ea4e1c7fc4223da3095218fda2c74a9cf6d304e75fd96ddcf88d0"}
Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.847905 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-xx8wj" event={"ID":"552805f7-e5f6-447b-a319-a3e3d62608f3","Type":"ContainerDied","Data":"31cb15ec06acc02d60feb50c3051c3474687b3e8e841a6f7053b73627327039b"}
Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.847940 4775 scope.go:117] "RemoveContainer" containerID="6b90c58f739ea4e1c7fc4223da3095218fda2c74a9cf6d304e75fd96ddcf88d0"
Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.850013 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-x4gqk" event={"ID":"78f375c8-5d62-4cbb-b348-8205d476d603","Type":"ContainerStarted","Data":"cbccf2dbb603d4f8c6c8b3929f8ded1dfcb1ccd264450f78faa8ec3434116628"}
Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.850039 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-x4gqk" event={"ID":"78f375c8-5d62-4cbb-b348-8205d476d603","Type":"ContainerStarted","Data":"968c26b22bd908cb1e261c87b6923181069238713e5e8019feeaa47f8ae7988f"}
Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.888751 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-index-x4gqk" podStartSLOduration=1.8153065819999998 podStartE2EDuration="1.888722714s" podCreationTimestamp="2026-01-23 14:25:19 +0000 UTC" firstStartedPulling="2026-01-23 14:25:20.45212413 +0000 UTC m=+1267.446952910" lastFinishedPulling="2026-01-23 14:25:20.525540262 +0000 UTC m=+1267.520369042" observedRunningTime="2026-01-23 14:25:20.883509406 +0000 UTC m=+1267.878338166" watchObservedRunningTime="2026-01-23 14:25:20.888722714 +0000 UTC m=+1267.883551484"
Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.893676 4775 scope.go:117] "RemoveContainer" containerID="6b90c58f739ea4e1c7fc4223da3095218fda2c74a9cf6d304e75fd96ddcf88d0"
Jan 23 14:25:20 crc kubenswrapper[4775]: E0123 14:25:20.894109 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b90c58f739ea4e1c7fc4223da3095218fda2c74a9cf6d304e75fd96ddcf88d0\": container with ID starting with 6b90c58f739ea4e1c7fc4223da3095218fda2c74a9cf6d304e75fd96ddcf88d0 not found: ID does not exist" containerID="6b90c58f739ea4e1c7fc4223da3095218fda2c74a9cf6d304e75fd96ddcf88d0"
Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.894148 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b90c58f739ea4e1c7fc4223da3095218fda2c74a9cf6d304e75fd96ddcf88d0"} err="failed to get container status \"6b90c58f739ea4e1c7fc4223da3095218fda2c74a9cf6d304e75fd96ddcf88d0\": rpc error: code = NotFound desc = could not find container \"6b90c58f739ea4e1c7fc4223da3095218fda2c74a9cf6d304e75fd96ddcf88d0\": container with ID starting with 6b90c58f739ea4e1c7fc4223da3095218fda2c74a9cf6d304e75fd96ddcf88d0 not found: ID does not exist"
Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.908107 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/nova-operator-index-xx8wj"]
Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.920139 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/nova-operator-index-xx8wj"]
Jan 23 14:25:21 crc kubenswrapper[4775]: I0123 14:25:21.721041 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="552805f7-e5f6-447b-a319-a3e3d62608f3" path="/var/lib/kubelet/pods/552805f7-e5f6-447b-a319-a3e3d62608f3/volumes"
Jan 23 14:25:23 crc kubenswrapper[4775]: I0123 14:25:23.219462 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:25:23 crc kubenswrapper[4775]: I0123 14:25:23.219929 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:25:29 crc kubenswrapper[4775]: I0123 14:25:29.893347 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-index-x4gqk"
Jan 23 14:25:29 crc kubenswrapper[4775]: I0123 14:25:29.894031 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/nova-operator-index-x4gqk"
Jan 23 14:25:29 crc kubenswrapper[4775]: I0123 14:25:29.928307 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/nova-operator-index-x4gqk"
Jan 23 14:25:29 crc kubenswrapper[4775]: I0123 14:25:29.991275 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-index-x4gqk"
Jan 23 14:25:38 crc kubenswrapper[4775]: I0123 14:25:38.411107 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc"]
Jan 23 14:25:38 crc kubenswrapper[4775]: E0123 14:25:38.411972 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="552805f7-e5f6-447b-a319-a3e3d62608f3" containerName="registry-server"
Jan 23 14:25:38 crc kubenswrapper[4775]: I0123 14:25:38.411988 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="552805f7-e5f6-447b-a319-a3e3d62608f3" containerName="registry-server"
Jan 23 14:25:38 crc kubenswrapper[4775]: I0123 14:25:38.412183 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="552805f7-e5f6-447b-a319-a3e3d62608f3" containerName="registry-server"
Jan 23 14:25:38 crc kubenswrapper[4775]: I0123 14:25:38.413503 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc"
Jan 23 14:25:38 crc kubenswrapper[4775]: I0123 14:25:38.421614 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-nklzs"
Jan 23 14:25:38 crc kubenswrapper[4775]: I0123 14:25:38.450629 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc"]
Jan 23 14:25:38 crc kubenswrapper[4775]: I0123 14:25:38.512073 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a7025f67-434a-4dba-9b3a-e3b809f5c614-bundle\") pod \"5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc\" (UID: \"a7025f67-434a-4dba-9b3a-e3b809f5c614\") " pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc"
Jan 23 14:25:38 crc kubenswrapper[4775]: I0123 14:25:38.512401 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a7025f67-434a-4dba-9b3a-e3b809f5c614-util\") pod \"5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc\" (UID: \"a7025f67-434a-4dba-9b3a-e3b809f5c614\") " pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc"
Jan 23 14:25:38 crc kubenswrapper[4775]: I0123 14:25:38.512658 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfxq4\" (UniqueName: \"kubernetes.io/projected/a7025f67-434a-4dba-9b3a-e3b809f5c614-kube-api-access-zfxq4\") pod \"5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc\" (UID: \"a7025f67-434a-4dba-9b3a-e3b809f5c614\") " pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc"
Jan 23 14:25:38 crc kubenswrapper[4775]: I0123 14:25:38.614188 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfxq4\" (UniqueName: \"kubernetes.io/projected/a7025f67-434a-4dba-9b3a-e3b809f5c614-kube-api-access-zfxq4\") pod \"5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc\" (UID: \"a7025f67-434a-4dba-9b3a-e3b809f5c614\") " pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc"
Jan 23 14:25:38 crc kubenswrapper[4775]: I0123 14:25:38.614281 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a7025f67-434a-4dba-9b3a-e3b809f5c614-bundle\") pod \"5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc\" (UID: \"a7025f67-434a-4dba-9b3a-e3b809f5c614\") " pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc"
Jan 23 14:25:38 crc kubenswrapper[4775]: I0123 14:25:38.614303 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a7025f67-434a-4dba-9b3a-e3b809f5c614-util\") pod \"5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc\" (UID: \"a7025f67-434a-4dba-9b3a-e3b809f5c614\") " pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc"
Jan 23 14:25:38 crc kubenswrapper[4775]: I0123 14:25:38.614814 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a7025f67-434a-4dba-9b3a-e3b809f5c614-util\") pod \"5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc\" (UID: \"a7025f67-434a-4dba-9b3a-e3b809f5c614\") " pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc"
Jan 23 14:25:38 crc kubenswrapper[4775]: I0123 14:25:38.615171 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a7025f67-434a-4dba-9b3a-e3b809f5c614-bundle\") pod \"5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc\" (UID: \"a7025f67-434a-4dba-9b3a-e3b809f5c614\") " pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc"
Jan 23 14:25:38 crc kubenswrapper[4775]: I0123 14:25:38.637107 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfxq4\" (UniqueName: \"kubernetes.io/projected/a7025f67-434a-4dba-9b3a-e3b809f5c614-kube-api-access-zfxq4\") pod \"5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc\" (UID: \"a7025f67-434a-4dba-9b3a-e3b809f5c614\") " pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc"
Jan 23 14:25:38 crc kubenswrapper[4775]: I0123 14:25:38.748092 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc"
Jan 23 14:25:39 crc kubenswrapper[4775]: I0123 14:25:39.217446 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc"]
Jan 23 14:25:39 crc kubenswrapper[4775]: W0123 14:25:39.227047 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7025f67_434a_4dba_9b3a_e3b809f5c614.slice/crio-c132cdd71e18904ddaab66994c62997ef1496ddd868c2b3c599059668d98a2dd WatchSource:0}: Error finding container c132cdd71e18904ddaab66994c62997ef1496ddd868c2b3c599059668d98a2dd: Status 404 returned error can't find the container with id c132cdd71e18904ddaab66994c62997ef1496ddd868c2b3c599059668d98a2dd
Jan 23 14:25:40 crc kubenswrapper[4775]: I0123 14:25:40.048687 4775 generic.go:334] "Generic (PLEG): container finished" podID="a7025f67-434a-4dba-9b3a-e3b809f5c614" containerID="739e20da7cc2d594cd007d49d1cb4d46d86d97e2f87bb0cc8db7e7ba0f7c49e2" exitCode=0
Jan 23 14:25:40 crc kubenswrapper[4775]: I0123 14:25:40.048758 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc" event={"ID":"a7025f67-434a-4dba-9b3a-e3b809f5c614","Type":"ContainerDied","Data":"739e20da7cc2d594cd007d49d1cb4d46d86d97e2f87bb0cc8db7e7ba0f7c49e2"}
Jan 23 14:25:40 crc kubenswrapper[4775]: I0123 14:25:40.049184 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc" event={"ID":"a7025f67-434a-4dba-9b3a-e3b809f5c614","Type":"ContainerStarted","Data":"c132cdd71e18904ddaab66994c62997ef1496ddd868c2b3c599059668d98a2dd"}
Jan 23 14:25:41 crc kubenswrapper[4775]: I0123 14:25:41.061209 4775 generic.go:334] "Generic (PLEG): container finished" podID="a7025f67-434a-4dba-9b3a-e3b809f5c614" containerID="81145fa9cc8e22af9d5f3739f292c51f9e7e1303411fc02184f15488fcaee2bc" exitCode=0
Jan 23 14:25:41 crc kubenswrapper[4775]: I0123 14:25:41.061371 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc" event={"ID":"a7025f67-434a-4dba-9b3a-e3b809f5c614","Type":"ContainerDied","Data":"81145fa9cc8e22af9d5f3739f292c51f9e7e1303411fc02184f15488fcaee2bc"}
Jan 23 14:25:42 crc kubenswrapper[4775]: I0123 14:25:42.076290 4775 generic.go:334] "Generic (PLEG): container finished" podID="a7025f67-434a-4dba-9b3a-e3b809f5c614" containerID="e1c8ccb7e0efad01a74a7bcb2e81ffe8f5651380b879f58fb9e879f6851a180a" exitCode=0
Jan 23 14:25:42 crc kubenswrapper[4775]: I0123 14:25:42.076408 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc" event={"ID":"a7025f67-434a-4dba-9b3a-e3b809f5c614","Type":"ContainerDied","Data":"e1c8ccb7e0efad01a74a7bcb2e81ffe8f5651380b879f58fb9e879f6851a180a"}
Jan 23 14:25:43 crc kubenswrapper[4775]: I0123 14:25:43.514329 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc"
Jan 23 14:25:43 crc kubenswrapper[4775]: I0123 14:25:43.557866 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfxq4\" (UniqueName: \"kubernetes.io/projected/a7025f67-434a-4dba-9b3a-e3b809f5c614-kube-api-access-zfxq4\") pod \"a7025f67-434a-4dba-9b3a-e3b809f5c614\" (UID: \"a7025f67-434a-4dba-9b3a-e3b809f5c614\") "
Jan 23 14:25:43 crc kubenswrapper[4775]: I0123 14:25:43.558280 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a7025f67-434a-4dba-9b3a-e3b809f5c614-bundle\") pod \"a7025f67-434a-4dba-9b3a-e3b809f5c614\" (UID: \"a7025f67-434a-4dba-9b3a-e3b809f5c614\") "
Jan 23 14:25:43 crc kubenswrapper[4775]: I0123 14:25:43.558342 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a7025f67-434a-4dba-9b3a-e3b809f5c614-util\") pod \"a7025f67-434a-4dba-9b3a-e3b809f5c614\" (UID: \"a7025f67-434a-4dba-9b3a-e3b809f5c614\") "
Jan 23 14:25:43 crc kubenswrapper[4775]: I0123 14:25:43.561096 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7025f67-434a-4dba-9b3a-e3b809f5c614-bundle" (OuterVolumeSpecName: "bundle") pod "a7025f67-434a-4dba-9b3a-e3b809f5c614" (UID: "a7025f67-434a-4dba-9b3a-e3b809f5c614"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:25:43 crc kubenswrapper[4775]: I0123 14:25:43.566949 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7025f67-434a-4dba-9b3a-e3b809f5c614-kube-api-access-zfxq4" (OuterVolumeSpecName: "kube-api-access-zfxq4") pod "a7025f67-434a-4dba-9b3a-e3b809f5c614" (UID: "a7025f67-434a-4dba-9b3a-e3b809f5c614"). InnerVolumeSpecName "kube-api-access-zfxq4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:25:43 crc kubenswrapper[4775]: I0123 14:25:43.572947 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7025f67-434a-4dba-9b3a-e3b809f5c614-util" (OuterVolumeSpecName: "util") pod "a7025f67-434a-4dba-9b3a-e3b809f5c614" (UID: "a7025f67-434a-4dba-9b3a-e3b809f5c614"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:25:43 crc kubenswrapper[4775]: I0123 14:25:43.661039 4775 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a7025f67-434a-4dba-9b3a-e3b809f5c614-util\") on node \"crc\" DevicePath \"\""
Jan 23 14:25:43 crc kubenswrapper[4775]: I0123 14:25:43.661109 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfxq4\" (UniqueName: \"kubernetes.io/projected/a7025f67-434a-4dba-9b3a-e3b809f5c614-kube-api-access-zfxq4\") on node \"crc\" DevicePath \"\""
Jan 23 14:25:43 crc kubenswrapper[4775]: I0123 14:25:43.661140 4775 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a7025f67-434a-4dba-9b3a-e3b809f5c614-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 14:25:44 crc kubenswrapper[4775]: I0123 14:25:44.103793 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc" event={"ID":"a7025f67-434a-4dba-9b3a-e3b809f5c614","Type":"ContainerDied","Data":"c132cdd71e18904ddaab66994c62997ef1496ddd868c2b3c599059668d98a2dd"}
Jan 23 14:25:44 crc kubenswrapper[4775]: I0123 14:25:44.103849 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c132cdd71e18904ddaab66994c62997ef1496ddd868c2b3c599059668d98a2dd"
Jan 23 14:25:44 crc kubenswrapper[4775]: I0123 14:25:44.103959 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.590962 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78"]
Jan 23 14:25:48 crc kubenswrapper[4775]: E0123 14:25:48.592624 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7025f67-434a-4dba-9b3a-e3b809f5c614" containerName="pull"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.592712 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7025f67-434a-4dba-9b3a-e3b809f5c614" containerName="pull"
Jan 23 14:25:48 crc kubenswrapper[4775]: E0123 14:25:48.592783 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7025f67-434a-4dba-9b3a-e3b809f5c614" containerName="extract"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.592863 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7025f67-434a-4dba-9b3a-e3b809f5c614" containerName="extract"
Jan 23 14:25:48 crc kubenswrapper[4775]: E0123 14:25:48.592924 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7025f67-434a-4dba-9b3a-e3b809f5c614" containerName="util"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.592981 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7025f67-434a-4dba-9b3a-e3b809f5c614" containerName="util"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.593197 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7025f67-434a-4dba-9b3a-e3b809f5c614" containerName="extract"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.593727 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.595695 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-service-cert"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.610387 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78"]
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.612221 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-mh2wz"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.646149 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/92377252-2e4d-48bb-95ea-724a4ff5c788-apiservice-cert\") pod \"nova-operator-controller-manager-7c5fcc4cc6-wwr78\" (UID: \"92377252-2e4d-48bb-95ea-724a4ff5c788\") " pod="openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.646250 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/92377252-2e4d-48bb-95ea-724a4ff5c788-webhook-cert\") pod \"nova-operator-controller-manager-7c5fcc4cc6-wwr78\" (UID: \"92377252-2e4d-48bb-95ea-724a4ff5c788\") " pod="openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.646318 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6q45\" (UniqueName: \"kubernetes.io/projected/92377252-2e4d-48bb-95ea-724a4ff5c788-kube-api-access-j6q45\") pod \"nova-operator-controller-manager-7c5fcc4cc6-wwr78\" (UID: \"92377252-2e4d-48bb-95ea-724a4ff5c788\") " pod="openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.748195 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/92377252-2e4d-48bb-95ea-724a4ff5c788-webhook-cert\") pod \"nova-operator-controller-manager-7c5fcc4cc6-wwr78\" (UID: \"92377252-2e4d-48bb-95ea-724a4ff5c788\") " pod="openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.748325 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6q45\" (UniqueName: \"kubernetes.io/projected/92377252-2e4d-48bb-95ea-724a4ff5c788-kube-api-access-j6q45\") pod \"nova-operator-controller-manager-7c5fcc4cc6-wwr78\" (UID: \"92377252-2e4d-48bb-95ea-724a4ff5c788\") " pod="openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.748455 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/92377252-2e4d-48bb-95ea-724a4ff5c788-apiservice-cert\") pod \"nova-operator-controller-manager-7c5fcc4cc6-wwr78\" (UID: \"92377252-2e4d-48bb-95ea-724a4ff5c788\") " pod="openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.755442 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/92377252-2e4d-48bb-95ea-724a4ff5c788-apiservice-cert\") pod \"nova-operator-controller-manager-7c5fcc4cc6-wwr78\" (UID: \"92377252-2e4d-48bb-95ea-724a4ff5c788\") " pod="openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.757785 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/92377252-2e4d-48bb-95ea-724a4ff5c788-webhook-cert\") pod \"nova-operator-controller-manager-7c5fcc4cc6-wwr78\" (UID: \"92377252-2e4d-48bb-95ea-724a4ff5c788\") " pod="openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.762720 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6q45\" (UniqueName: \"kubernetes.io/projected/92377252-2e4d-48bb-95ea-724a4ff5c788-kube-api-access-j6q45\") pod \"nova-operator-controller-manager-7c5fcc4cc6-wwr78\" (UID: \"92377252-2e4d-48bb-95ea-724a4ff5c788\") " pod="openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.911309 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78"
Jan 23 14:25:49 crc kubenswrapper[4775]: I0123 14:25:49.490558 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78"]
Jan 23 14:25:49 crc kubenswrapper[4775]: W0123 14:25:49.504265 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92377252_2e4d_48bb_95ea_724a4ff5c788.slice/crio-0d0cf92a50069b2429dbd3e094c5d24c5436675eaaaf4fb44d483301c4dbf620 WatchSource:0}: Error finding container 0d0cf92a50069b2429dbd3e094c5d24c5436675eaaaf4fb44d483301c4dbf620: Status 404 returned error can't find the container with id 0d0cf92a50069b2429dbd3e094c5d24c5436675eaaaf4fb44d483301c4dbf620
Jan 23 14:25:50 crc kubenswrapper[4775]: I0123 14:25:50.169677 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78" event={"ID":"92377252-2e4d-48bb-95ea-724a4ff5c788","Type":"ContainerStarted","Data":"ca2136b21ddc8d912619d58ffef5ca99beab2de7fc777ad707902d08a38fd5cb"}
Jan 23 14:25:50 crc kubenswrapper[4775]: I0123 14:25:50.169976 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78" event={"ID":"92377252-2e4d-48bb-95ea-724a4ff5c788","Type":"ContainerStarted","Data":"0d0cf92a50069b2429dbd3e094c5d24c5436675eaaaf4fb44d483301c4dbf620"}
Jan 23 14:25:50 crc kubenswrapper[4775]: I0123 14:25:50.170121 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78"
Jan 23 14:25:50 crc kubenswrapper[4775]: I0123 14:25:50.211537 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78" podStartSLOduration=2.211510081 podStartE2EDuration="2.211510081s" podCreationTimestamp="2026-01-23 14:25:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:25:50.203778027 +0000 UTC m=+1297.198606807" watchObservedRunningTime="2026-01-23 14:25:50.211510081 +0000 UTC m=+1297.206338831"
Jan 23 14:25:53 crc kubenswrapper[4775]: I0123 14:25:53.218517 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:25:53 crc kubenswrapper[4775]: I0123 14:25:53.218884 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:25:58 crc kubenswrapper[4775]: I0123 14:25:58.917557 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78"
Jan 23 14:26:23 crc kubenswrapper[4775]: I0123 14:26:23.219592 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:26:23 crc kubenswrapper[4775]: I0123 14:26:23.220111 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:26:23 crc kubenswrapper[4775]: I0123 14:26:23.220155 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg"
Jan 23 14:26:23 crc kubenswrapper[4775]: I0123 14:26:23.220748 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a5634c941e351401aed478dd8e700e6d7b7de6241fab2a08ba60719db5eab596"} pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 14:26:23 crc kubenswrapper[4775]: I0123 14:26:23.220798 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" containerID="cri-o://a5634c941e351401aed478dd8e700e6d7b7de6241fab2a08ba60719db5eab596" gracePeriod=600
Jan 23 14:26:23 crc kubenswrapper[4775]: I0123 14:26:23.444154 4775 generic.go:334] "Generic (PLEG): container finished" podID="4fea0767-0566-4214-855d-ed0373946271" containerID="a5634c941e351401aed478dd8e700e6d7b7de6241fab2a08ba60719db5eab596" exitCode=0
Jan 23 14:26:23 crc kubenswrapper[4775]: I0123 14:26:23.444206 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerDied","Data":"a5634c941e351401aed478dd8e700e6d7b7de6241fab2a08ba60719db5eab596"}
Jan 23 14:26:23 crc kubenswrapper[4775]: I0123 14:26:23.444245 4775 scope.go:117] "RemoveContainer" containerID="04aeabd8c4a1cb3e5fe85b5d65d741e8a1d8f8a6f9824c7a0b310cfc24829df1"
Jan 23 14:26:24 crc kubenswrapper[4775]: I0123 14:26:24.458764 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerStarted","Data":"69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342"}
Jan 23 14:26:24 crc kubenswrapper[4775]: I0123 14:26:24.846692 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-db-create-4dbx9"]
Jan 23 14:26:24 crc kubenswrapper[4775]: I0123 14:26:24.847604 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-4dbx9"
Jan 23 14:26:24 crc kubenswrapper[4775]: I0123 14:26:24.870358 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-4dbx9"]
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.015731 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zczcx\" (UniqueName: \"kubernetes.io/projected/cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff-kube-api-access-zczcx\") pod \"nova-api-db-create-4dbx9\" (UID: \"cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff\") " pod="nova-kuttl-default/nova-api-db-create-4dbx9"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.016198 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff-operator-scripts\") pod \"nova-api-db-create-4dbx9\" (UID: \"cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff\") " pod="nova-kuttl-default/nova-api-db-create-4dbx9"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.046279 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-nvvdc"]
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.047296 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-nvvdc"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.053421 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-74fa-account-create-update-r8n42"]
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.054590 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-74fa-account-create-update-r8n42"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.059302 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-api-db-secret"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.071139 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-nvvdc"]
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.084395 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-74fa-account-create-update-r8n42"]
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.117686 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zczcx\" (UniqueName: \"kubernetes.io/projected/cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff-kube-api-access-zczcx\") pod \"nova-api-db-create-4dbx9\" (UID: \"cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff\") " pod="nova-kuttl-default/nova-api-db-create-4dbx9"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.117746 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff-operator-scripts\") pod \"nova-api-db-create-4dbx9\" (UID: \"cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff\") " pod="nova-kuttl-default/nova-api-db-create-4dbx9"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.118485 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff-operator-scripts\") pod \"nova-api-db-create-4dbx9\" (UID: \"cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff\") " pod="nova-kuttl-default/nova-api-db-create-4dbx9"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.137997 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-q4r8h"]
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.139186 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-q4r8h"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.146727 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-q4r8h"]
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.150112 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zczcx\" (UniqueName: \"kubernetes.io/projected/cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff-kube-api-access-zczcx\") pod \"nova-api-db-create-4dbx9\" (UID: \"cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff\") " pod="nova-kuttl-default/nova-api-db-create-4dbx9"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.172417 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-4dbx9"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.219736 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjrhr\" (UniqueName: \"kubernetes.io/projected/f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4-kube-api-access-bjrhr\") pod \"nova-cell0-db-create-nvvdc\" (UID: \"f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4\") " pod="nova-kuttl-default/nova-cell0-db-create-nvvdc"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.219790 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ed8da8c-1d52-44a3-b1c8-b68000003d91-operator-scripts\") pod \"nova-api-74fa-account-create-update-r8n42\" (UID: \"9ed8da8c-1d52-44a3-b1c8-b68000003d91\") " pod="nova-kuttl-default/nova-api-74fa-account-create-update-r8n42"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.219869 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4-operator-scripts\") pod \"nova-cell0-db-create-nvvdc\" (UID: \"f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4\") " pod="nova-kuttl-default/nova-cell0-db-create-nvvdc"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.219892 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmgkv\" (UniqueName: \"kubernetes.io/projected/9ed8da8c-1d52-44a3-b1c8-b68000003d91-kube-api-access-jmgkv\") pod \"nova-api-74fa-account-create-update-r8n42\" (UID: \"9ed8da8c-1d52-44a3-b1c8-b68000003d91\") " pod="nova-kuttl-default/nova-api-74fa-account-create-update-r8n42"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.247250 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn"]
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.248204 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.253759 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell0-db-secret"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.259662 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn"]
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.321614 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjrhr\" (UniqueName: \"kubernetes.io/projected/f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4-kube-api-access-bjrhr\") pod \"nova-cell0-db-create-nvvdc\" (UID: \"f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4\") " pod="nova-kuttl-default/nova-cell0-db-create-nvvdc"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.321918 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ed8da8c-1d52-44a3-b1c8-b68000003d91-operator-scripts\") pod \"nova-api-74fa-account-create-update-r8n42\" (UID: \"9ed8da8c-1d52-44a3-b1c8-b68000003d91\") " pod="nova-kuttl-default/nova-api-74fa-account-create-update-r8n42"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.321951 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc8t4\" (UniqueName: \"kubernetes.io/projected/26928cf5-7a29-4fab-a501-5746726fc42a-kube-api-access-cc8t4\") pod \"nova-cell1-db-create-q4r8h\" (UID: \"26928cf5-7a29-4fab-a501-5746726fc42a\") " pod="nova-kuttl-default/nova-cell1-db-create-q4r8h"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.321980 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4-operator-scripts\") pod \"nova-cell0-db-create-nvvdc\" (UID: \"f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4\") " pod="nova-kuttl-default/nova-cell0-db-create-nvvdc"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.322009 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmgkv\" (UniqueName: \"kubernetes.io/projected/9ed8da8c-1d52-44a3-b1c8-b68000003d91-kube-api-access-jmgkv\") pod \"nova-api-74fa-account-create-update-r8n42\" (UID: \"9ed8da8c-1d52-44a3-b1c8-b68000003d91\") " pod="nova-kuttl-default/nova-api-74fa-account-create-update-r8n42"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.322032 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26928cf5-7a29-4fab-a501-5746726fc42a-operator-scripts\") pod \"nova-cell1-db-create-q4r8h\" (UID: \"26928cf5-7a29-4fab-a501-5746726fc42a\") " pod="nova-kuttl-default/nova-cell1-db-create-q4r8h"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.322780 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4-operator-scripts\") pod \"nova-cell0-db-create-nvvdc\" (UID: \"f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4\") " pod="nova-kuttl-default/nova-cell0-db-create-nvvdc"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.322900 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ed8da8c-1d52-44a3-b1c8-b68000003d91-operator-scripts\") pod \"nova-api-74fa-account-create-update-r8n42\" (UID: \"9ed8da8c-1d52-44a3-b1c8-b68000003d91\") " pod="nova-kuttl-default/nova-api-74fa-account-create-update-r8n42"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.337794 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjrhr\" (UniqueName: \"kubernetes.io/projected/f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4-kube-api-access-bjrhr\") pod \"nova-cell0-db-create-nvvdc\" (UID: \"f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4\") " pod="nova-kuttl-default/nova-cell0-db-create-nvvdc"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.341364 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmgkv\" (UniqueName: \"kubernetes.io/projected/9ed8da8c-1d52-44a3-b1c8-b68000003d91-kube-api-access-jmgkv\") pod \"nova-api-74fa-account-create-update-r8n42\" (UID: \"9ed8da8c-1d52-44a3-b1c8-b68000003d91\") " pod="nova-kuttl-default/nova-api-74fa-account-create-update-r8n42"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.368467 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-nvvdc"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.376357 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-74fa-account-create-update-r8n42"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.423711 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc8t4\" (UniqueName: \"kubernetes.io/projected/26928cf5-7a29-4fab-a501-5746726fc42a-kube-api-access-cc8t4\") pod \"nova-cell1-db-create-q4r8h\" (UID: \"26928cf5-7a29-4fab-a501-5746726fc42a\") " pod="nova-kuttl-default/nova-cell1-db-create-q4r8h"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.423771 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6k7b\" (UniqueName: \"kubernetes.io/projected/cce1ea66-c6e5-41e7-b0fc-f915fab736f9-kube-api-access-m6k7b\") pod \"nova-cell0-dec4-account-create-update-thscn\" (UID: \"cce1ea66-c6e5-41e7-b0fc-f915fab736f9\") " pod="nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.423820 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cce1ea66-c6e5-41e7-b0fc-f915fab736f9-operator-scripts\") pod \"nova-cell0-dec4-account-create-update-thscn\" (UID: \"cce1ea66-c6e5-41e7-b0fc-f915fab736f9\") " pod="nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.423871 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26928cf5-7a29-4fab-a501-5746726fc42a-operator-scripts\") pod \"nova-cell1-db-create-q4r8h\" (UID: \"26928cf5-7a29-4fab-a501-5746726fc42a\") " pod="nova-kuttl-default/nova-cell1-db-create-q4r8h"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.424699 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26928cf5-7a29-4fab-a501-5746726fc42a-operator-scripts\") pod \"nova-cell1-db-create-q4r8h\" (UID: \"26928cf5-7a29-4fab-a501-5746726fc42a\") " pod="nova-kuttl-default/nova-cell1-db-create-q4r8h"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.442704 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw"]
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.443598 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.447168 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell1-db-secret"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.450093 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc8t4\" (UniqueName: \"kubernetes.io/projected/26928cf5-7a29-4fab-a501-5746726fc42a-kube-api-access-cc8t4\") pod \"nova-cell1-db-create-q4r8h\" (UID: \"26928cf5-7a29-4fab-a501-5746726fc42a\") " pod="nova-kuttl-default/nova-cell1-db-create-q4r8h"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.458117 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw"]
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.493048 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-q4r8h"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.525783 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5980f4a0-814a-4f66-b637-80071a62061b-operator-scripts\") pod \"nova-cell1-fcdd-account-create-update-58ttw\" (UID: \"5980f4a0-814a-4f66-b637-80071a62061b\") " pod="nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.525885 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6k7b\" (UniqueName: \"kubernetes.io/projected/cce1ea66-c6e5-41e7-b0fc-f915fab736f9-kube-api-access-m6k7b\") pod \"nova-cell0-dec4-account-create-update-thscn\" (UID: \"cce1ea66-c6e5-41e7-b0fc-f915fab736f9\") " pod="nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.525906 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cce1ea66-c6e5-41e7-b0fc-f915fab736f9-operator-scripts\") pod \"nova-cell0-dec4-account-create-update-thscn\" (UID: \"cce1ea66-c6e5-41e7-b0fc-f915fab736f9\") " pod="nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.525925 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld67n\" (UniqueName: \"kubernetes.io/projected/5980f4a0-814a-4f66-b637-80071a62061b-kube-api-access-ld67n\") pod \"nova-cell1-fcdd-account-create-update-58ttw\" (UID: \"5980f4a0-814a-4f66-b637-80071a62061b\") " pod="nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.526857 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cce1ea66-c6e5-41e7-b0fc-f915fab736f9-operator-scripts\") pod \"nova-cell0-dec4-account-create-update-thscn\" (UID: \"cce1ea66-c6e5-41e7-b0fc-f915fab736f9\") " pod="nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.551458 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6k7b\" (UniqueName: \"kubernetes.io/projected/cce1ea66-c6e5-41e7-b0fc-f915fab736f9-kube-api-access-m6k7b\") pod \"nova-cell0-dec4-account-create-update-thscn\" (UID: \"cce1ea66-c6e5-41e7-b0fc-f915fab736f9\") " pod="nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.586670 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.614882 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-4dbx9"]
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.627961 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5980f4a0-814a-4f66-b637-80071a62061b-operator-scripts\") pod \"nova-cell1-fcdd-account-create-update-58ttw\" (UID: \"5980f4a0-814a-4f66-b637-80071a62061b\") " pod="nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.628176 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld67n\" (UniqueName: \"kubernetes.io/projected/5980f4a0-814a-4f66-b637-80071a62061b-kube-api-access-ld67n\") pod \"nova-cell1-fcdd-account-create-update-58ttw\" (UID: \"5980f4a0-814a-4f66-b637-80071a62061b\") " pod="nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.628705 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5980f4a0-814a-4f66-b637-80071a62061b-operator-scripts\") pod \"nova-cell1-fcdd-account-create-update-58ttw\" (UID: \"5980f4a0-814a-4f66-b637-80071a62061b\") " pod="nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw"
Jan 23 14:26:25 crc kubenswrapper[4775]: W0123 14:26:25.628725 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdce4e03_ab75_4cf0_ae3c_8a9fff7ee6ff.slice/crio-6675dac067de55aea976635a6c1ad021ebe6fd0ca80bad34be3a83c0643d3e01 WatchSource:0}: Error finding container 6675dac067de55aea976635a6c1ad021ebe6fd0ca80bad34be3a83c0643d3e01: Status 404 returned error can't find the container with id 6675dac067de55aea976635a6c1ad021ebe6fd0ca80bad34be3a83c0643d3e01
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.646835 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld67n\" (UniqueName: \"kubernetes.io/projected/5980f4a0-814a-4f66-b637-80071a62061b-kube-api-access-ld67n\") pod \"nova-cell1-fcdd-account-create-update-58ttw\" (UID: \"5980f4a0-814a-4f66-b637-80071a62061b\") " pod="nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.766615 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.864154 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-74fa-account-create-update-r8n42"]
Jan 23 14:26:25 crc kubenswrapper[4775]: W0123 14:26:25.868685 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ed8da8c_1d52_44a3_b1c8_b68000003d91.slice/crio-ff19bfe95f04bdee78af2f96579c263431588006753808b8d98807db6f53fb58 WatchSource:0}: Error finding container ff19bfe95f04bdee78af2f96579c263431588006753808b8d98807db6f53fb58: Status 404 returned error can't find the container with id ff19bfe95f04bdee78af2f96579c263431588006753808b8d98807db6f53fb58
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.879790 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-nvvdc"]
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.093543 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-q4r8h"]
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.117650 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn"]
Jan 23 14:26:26 crc kubenswrapper[4775]: W0123 14:26:26.123668 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcce1ea66_c6e5_41e7_b0fc_f915fab736f9.slice/crio-b446ca4b2234a49868d0255ccdad7ec3f62cb91c3250e5ec9aa847157d81e7f9 WatchSource:0}: Error finding container b446ca4b2234a49868d0255ccdad7ec3f62cb91c3250e5ec9aa847157d81e7f9: Status 404 returned error can't find the container with id b446ca4b2234a49868d0255ccdad7ec3f62cb91c3250e5ec9aa847157d81e7f9
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.240189 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw"]
Jan 23 14:26:26 crc kubenswrapper[4775]: W0123 14:26:26.244217 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5980f4a0_814a_4f66_b637_80071a62061b.slice/crio-d9a0a938b65febf8e5aacc28e9a6500a18e4d70b0ab6abd58105f05b68b87303 WatchSource:0}: Error finding container d9a0a938b65febf8e5aacc28e9a6500a18e4d70b0ab6abd58105f05b68b87303: Status 404 returned error can't find the container with id d9a0a938b65febf8e5aacc28e9a6500a18e4d70b0ab6abd58105f05b68b87303
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.484590 4775 generic.go:334] "Generic (PLEG): container finished" podID="26928cf5-7a29-4fab-a501-5746726fc42a" containerID="cf5d6f96b976fd01d4f59841045416396d0e05c1aeb5c738f3b2003a516bd24d" exitCode=0
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.484641 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-q4r8h" event={"ID":"26928cf5-7a29-4fab-a501-5746726fc42a","Type":"ContainerDied","Data":"cf5d6f96b976fd01d4f59841045416396d0e05c1aeb5c738f3b2003a516bd24d"}
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.484684 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-q4r8h" event={"ID":"26928cf5-7a29-4fab-a501-5746726fc42a","Type":"ContainerStarted","Data":"403c4aa5de9aa5e208f8db7257c1e2c3b3e5e95a2ad0d66c24e99f8a971612f5"}
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.486188 4775 generic.go:334] "Generic (PLEG): container finished" podID="9ed8da8c-1d52-44a3-b1c8-b68000003d91" containerID="ad4721fdee0a09d6f1ae7bbee38e4c36536b30b8fa6aaeaab9d4a101c5700669" exitCode=0
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.486210 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-74fa-account-create-update-r8n42" event={"ID":"9ed8da8c-1d52-44a3-b1c8-b68000003d91","Type":"ContainerDied","Data":"ad4721fdee0a09d6f1ae7bbee38e4c36536b30b8fa6aaeaab9d4a101c5700669"}
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.486235 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-74fa-account-create-update-r8n42" event={"ID":"9ed8da8c-1d52-44a3-b1c8-b68000003d91","Type":"ContainerStarted","Data":"ff19bfe95f04bdee78af2f96579c263431588006753808b8d98807db6f53fb58"}
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.487658 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw" event={"ID":"5980f4a0-814a-4f66-b637-80071a62061b","Type":"ContainerStarted","Data":"d9a0a938b65febf8e5aacc28e9a6500a18e4d70b0ab6abd58105f05b68b87303"}
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.489979 4775 generic.go:334] "Generic (PLEG): container finished" podID="cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff" containerID="50f2c96b0b5892a7771fccd5951249dad10d9735e71ae46903621151778752dd" exitCode=0
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.490061 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-4dbx9" event={"ID":"cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff","Type":"ContainerDied","Data":"50f2c96b0b5892a7771fccd5951249dad10d9735e71ae46903621151778752dd"}
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.490099 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-4dbx9" event={"ID":"cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff","Type":"ContainerStarted","Data":"6675dac067de55aea976635a6c1ad021ebe6fd0ca80bad34be3a83c0643d3e01"}
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.491765 4775 generic.go:334] "Generic (PLEG): container finished" podID="cce1ea66-c6e5-41e7-b0fc-f915fab736f9" containerID="8fbaa9880c81768fdeafd7a8d660d5afda75513a9354f9b29aea974cf6c99474" exitCode=0
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.491842 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn" event={"ID":"cce1ea66-c6e5-41e7-b0fc-f915fab736f9","Type":"ContainerDied","Data":"8fbaa9880c81768fdeafd7a8d660d5afda75513a9354f9b29aea974cf6c99474"}
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.491864 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn" event={"ID":"cce1ea66-c6e5-41e7-b0fc-f915fab736f9","Type":"ContainerStarted","Data":"b446ca4b2234a49868d0255ccdad7ec3f62cb91c3250e5ec9aa847157d81e7f9"}
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.493123 4775 generic.go:334] "Generic (PLEG): container finished" podID="f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4" containerID="3089717e59d9d63482e14d904b82257965098590f1b4c79bdacedb05c6060f6e" exitCode=0
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.493153 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-nvvdc" event={"ID":"f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4","Type":"ContainerDied","Data":"3089717e59d9d63482e14d904b82257965098590f1b4c79bdacedb05c6060f6e"}
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.493169 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-nvvdc" event={"ID":"f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4","Type":"ContainerStarted","Data":"1a3c20c4f02081b2346ab2b36de428d4d6c673b3c82e798cfa24a22c591506b6"}
Jan 23 14:26:27 crc kubenswrapper[4775]: I0123 14:26:27.509031 4775 generic.go:334] "Generic (PLEG): container finished" podID="5980f4a0-814a-4f66-b637-80071a62061b" containerID="dfd2790cbd2b3023e0c67bf180e375a19d1caefe130ba7bcb469b97ad55122e0" exitCode=0
Jan 23 14:26:27 crc kubenswrapper[4775]: I0123 14:26:27.509910 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw" event={"ID":"5980f4a0-814a-4f66-b637-80071a62061b","Type":"ContainerDied","Data":"dfd2790cbd2b3023e0c67bf180e375a19d1caefe130ba7bcb469b97ad55122e0"}
Jan 23 14:26:27 crc kubenswrapper[4775]: I0123 14:26:27.965701 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-74fa-account-create-update-r8n42"
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.086506 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ed8da8c-1d52-44a3-b1c8-b68000003d91-operator-scripts\") pod \"9ed8da8c-1d52-44a3-b1c8-b68000003d91\" (UID: \"9ed8da8c-1d52-44a3-b1c8-b68000003d91\") "
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.086850 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmgkv\" (UniqueName: \"kubernetes.io/projected/9ed8da8c-1d52-44a3-b1c8-b68000003d91-kube-api-access-jmgkv\") pod \"9ed8da8c-1d52-44a3-b1c8-b68000003d91\" (UID: \"9ed8da8c-1d52-44a3-b1c8-b68000003d91\") "
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.087009 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ed8da8c-1d52-44a3-b1c8-b68000003d91-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9ed8da8c-1d52-44a3-b1c8-b68000003d91" (UID: "9ed8da8c-1d52-44a3-b1c8-b68000003d91"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.088015 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ed8da8c-1d52-44a3-b1c8-b68000003d91-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.092004 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ed8da8c-1d52-44a3-b1c8-b68000003d91-kube-api-access-jmgkv" (OuterVolumeSpecName: "kube-api-access-jmgkv") pod "9ed8da8c-1d52-44a3-b1c8-b68000003d91" (UID: "9ed8da8c-1d52-44a3-b1c8-b68000003d91"). InnerVolumeSpecName "kube-api-access-jmgkv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.189307 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmgkv\" (UniqueName: \"kubernetes.io/projected/9ed8da8c-1d52-44a3-b1c8-b68000003d91-kube-api-access-jmgkv\") on node \"crc\" DevicePath \"\""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.230578 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-q4r8h"
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.240723 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-nvvdc"
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.256529 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn"
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.271873 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-4dbx9"
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.391877 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26928cf5-7a29-4fab-a501-5746726fc42a-operator-scripts\") pod \"26928cf5-7a29-4fab-a501-5746726fc42a\" (UID: \"26928cf5-7a29-4fab-a501-5746726fc42a\") "
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.391935 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff-operator-scripts\") pod \"cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff\" (UID: \"cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff\") "
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.391971 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zczcx\" (UniqueName: \"kubernetes.io/projected/cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff-kube-api-access-zczcx\") pod \"cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff\" (UID: \"cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff\") "
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.392032 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6k7b\" (UniqueName: \"kubernetes.io/projected/cce1ea66-c6e5-41e7-b0fc-f915fab736f9-kube-api-access-m6k7b\") pod \"cce1ea66-c6e5-41e7-b0fc-f915fab736f9\" (UID: \"cce1ea66-c6e5-41e7-b0fc-f915fab736f9\") "
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.392071 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc8t4\" (UniqueName: \"kubernetes.io/projected/26928cf5-7a29-4fab-a501-5746726fc42a-kube-api-access-cc8t4\") pod \"26928cf5-7a29-4fab-a501-5746726fc42a\" (UID: \"26928cf5-7a29-4fab-a501-5746726fc42a\") "
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.392095 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4-operator-scripts\") pod \"f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4\" (UID: \"f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4\") "
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.392126 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjrhr\" (UniqueName: \"kubernetes.io/projected/f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4-kube-api-access-bjrhr\") pod \"f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4\" (UID: \"f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4\") "
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.392148 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cce1ea66-c6e5-41e7-b0fc-f915fab736f9-operator-scripts\") pod \"cce1ea66-c6e5-41e7-b0fc-f915fab736f9\" (UID: \"cce1ea66-c6e5-41e7-b0fc-f915fab736f9\") "
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.392412 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26928cf5-7a29-4fab-a501-5746726fc42a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26928cf5-7a29-4fab-a501-5746726fc42a" (UID: "26928cf5-7a29-4fab-a501-5746726fc42a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.392616 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26928cf5-7a29-4fab-a501-5746726fc42a-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.392794 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cce1ea66-c6e5-41e7-b0fc-f915fab736f9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cce1ea66-c6e5-41e7-b0fc-f915fab736f9" (UID: "cce1ea66-c6e5-41e7-b0fc-f915fab736f9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.393515 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff" (UID: "cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.393754 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4" (UID: "f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.396029 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cce1ea66-c6e5-41e7-b0fc-f915fab736f9-kube-api-access-m6k7b" (OuterVolumeSpecName: "kube-api-access-m6k7b") pod "cce1ea66-c6e5-41e7-b0fc-f915fab736f9" (UID: "cce1ea66-c6e5-41e7-b0fc-f915fab736f9"). InnerVolumeSpecName "kube-api-access-m6k7b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.396101 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff-kube-api-access-zczcx" (OuterVolumeSpecName: "kube-api-access-zczcx") pod "cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff" (UID: "cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff"). InnerVolumeSpecName "kube-api-access-zczcx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.396288 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26928cf5-7a29-4fab-a501-5746726fc42a-kube-api-access-cc8t4" (OuterVolumeSpecName: "kube-api-access-cc8t4") pod "26928cf5-7a29-4fab-a501-5746726fc42a" (UID: "26928cf5-7a29-4fab-a501-5746726fc42a"). InnerVolumeSpecName "kube-api-access-cc8t4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.396388 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4-kube-api-access-bjrhr" (OuterVolumeSpecName: "kube-api-access-bjrhr") pod "f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4" (UID: "f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4"). InnerVolumeSpecName "kube-api-access-bjrhr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.494659 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.495124 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zczcx\" (UniqueName: \"kubernetes.io/projected/cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff-kube-api-access-zczcx\") on node \"crc\" DevicePath \"\""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.495150 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6k7b\" (UniqueName: \"kubernetes.io/projected/cce1ea66-c6e5-41e7-b0fc-f915fab736f9-kube-api-access-m6k7b\") on node \"crc\" DevicePath \"\""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.495170 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc8t4\" (UniqueName: \"kubernetes.io/projected/26928cf5-7a29-4fab-a501-5746726fc42a-kube-api-access-cc8t4\") on node \"crc\" DevicePath \"\""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.495190 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.495209 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjrhr\" (UniqueName: \"kubernetes.io/projected/f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4-kube-api-access-bjrhr\") on node \"crc\" DevicePath \"\""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.495230 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cce1ea66-c6e5-41e7-b0fc-f915fab736f9-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.519538 4775 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-q4r8h" Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.519531 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-q4r8h" event={"ID":"26928cf5-7a29-4fab-a501-5746726fc42a","Type":"ContainerDied","Data":"403c4aa5de9aa5e208f8db7257c1e2c3b3e5e95a2ad0d66c24e99f8a971612f5"} Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.519687 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="403c4aa5de9aa5e208f8db7257c1e2c3b3e5e95a2ad0d66c24e99f8a971612f5" Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.523414 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-74fa-account-create-update-r8n42" event={"ID":"9ed8da8c-1d52-44a3-b1c8-b68000003d91","Type":"ContainerDied","Data":"ff19bfe95f04bdee78af2f96579c263431588006753808b8d98807db6f53fb58"} Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.523445 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff19bfe95f04bdee78af2f96579c263431588006753808b8d98807db6f53fb58" Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.523508 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-74fa-account-create-update-r8n42" Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.527305 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-4dbx9" event={"ID":"cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff","Type":"ContainerDied","Data":"6675dac067de55aea976635a6c1ad021ebe6fd0ca80bad34be3a83c0643d3e01"} Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.527344 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6675dac067de55aea976635a6c1ad021ebe6fd0ca80bad34be3a83c0643d3e01" Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.527344 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-4dbx9" Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.529175 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn" event={"ID":"cce1ea66-c6e5-41e7-b0fc-f915fab736f9","Type":"ContainerDied","Data":"b446ca4b2234a49868d0255ccdad7ec3f62cb91c3250e5ec9aa847157d81e7f9"} Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.529208 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b446ca4b2234a49868d0255ccdad7ec3f62cb91c3250e5ec9aa847157d81e7f9" Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.529248 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn" Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.530916 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-nvvdc" Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.531044 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-nvvdc" event={"ID":"f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4","Type":"ContainerDied","Data":"1a3c20c4f02081b2346ab2b36de428d4d6c673b3c82e798cfa24a22c591506b6"} Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.531377 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a3c20c4f02081b2346ab2b36de428d4d6c673b3c82e798cfa24a22c591506b6" Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.838696 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw" Jan 23 14:26:29 crc kubenswrapper[4775]: I0123 14:26:29.002128 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ld67n\" (UniqueName: \"kubernetes.io/projected/5980f4a0-814a-4f66-b637-80071a62061b-kube-api-access-ld67n\") pod \"5980f4a0-814a-4f66-b637-80071a62061b\" (UID: \"5980f4a0-814a-4f66-b637-80071a62061b\") " Jan 23 14:26:29 crc kubenswrapper[4775]: I0123 14:26:29.002348 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5980f4a0-814a-4f66-b637-80071a62061b-operator-scripts\") pod \"5980f4a0-814a-4f66-b637-80071a62061b\" (UID: \"5980f4a0-814a-4f66-b637-80071a62061b\") " Jan 23 14:26:29 crc kubenswrapper[4775]: I0123 14:26:29.003560 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5980f4a0-814a-4f66-b637-80071a62061b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5980f4a0-814a-4f66-b637-80071a62061b" (UID: "5980f4a0-814a-4f66-b637-80071a62061b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:26:29 crc kubenswrapper[4775]: I0123 14:26:29.008670 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5980f4a0-814a-4f66-b637-80071a62061b-kube-api-access-ld67n" (OuterVolumeSpecName: "kube-api-access-ld67n") pod "5980f4a0-814a-4f66-b637-80071a62061b" (UID: "5980f4a0-814a-4f66-b637-80071a62061b"). InnerVolumeSpecName "kube-api-access-ld67n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:26:29 crc kubenswrapper[4775]: I0123 14:26:29.104187 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5980f4a0-814a-4f66-b637-80071a62061b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:26:29 crc kubenswrapper[4775]: I0123 14:26:29.104241 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ld67n\" (UniqueName: \"kubernetes.io/projected/5980f4a0-814a-4f66-b637-80071a62061b-kube-api-access-ld67n\") on node \"crc\" DevicePath \"\"" Jan 23 14:26:29 crc kubenswrapper[4775]: I0123 14:26:29.544523 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw" event={"ID":"5980f4a0-814a-4f66-b637-80071a62061b","Type":"ContainerDied","Data":"d9a0a938b65febf8e5aacc28e9a6500a18e4d70b0ab6abd58105f05b68b87303"} Jan 23 14:26:29 crc kubenswrapper[4775]: I0123 14:26:29.544581 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9a0a938b65febf8e5aacc28e9a6500a18e4d70b0ab6abd58105f05b68b87303" Jan 23 14:26:29 crc kubenswrapper[4775]: I0123 14:26:29.544590 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.609993 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76"] Jan 23 14:26:30 crc kubenswrapper[4775]: E0123 14:26:30.610588 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5980f4a0-814a-4f66-b637-80071a62061b" containerName="mariadb-account-create-update" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.610604 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="5980f4a0-814a-4f66-b637-80071a62061b" containerName="mariadb-account-create-update" Jan 23 14:26:30 crc kubenswrapper[4775]: E0123 14:26:30.610623 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cce1ea66-c6e5-41e7-b0fc-f915fab736f9" containerName="mariadb-account-create-update" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.610633 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="cce1ea66-c6e5-41e7-b0fc-f915fab736f9" containerName="mariadb-account-create-update" Jan 23 14:26:30 crc kubenswrapper[4775]: E0123 14:26:30.610648 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26928cf5-7a29-4fab-a501-5746726fc42a" containerName="mariadb-database-create" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.610656 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="26928cf5-7a29-4fab-a501-5746726fc42a" containerName="mariadb-database-create" Jan 23 14:26:30 crc kubenswrapper[4775]: E0123 14:26:30.610703 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff" containerName="mariadb-database-create" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.610711 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff" containerName="mariadb-database-create" Jan 23 14:26:30 crc kubenswrapper[4775]: E0123 14:26:30.610749 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed8da8c-1d52-44a3-b1c8-b68000003d91" containerName="mariadb-account-create-update" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.610757 4775 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="9ed8da8c-1d52-44a3-b1c8-b68000003d91" containerName="mariadb-account-create-update" Jan 23 14:26:30 crc kubenswrapper[4775]: E0123 14:26:30.610775 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4" containerName="mariadb-database-create" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.610783 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4" containerName="mariadb-database-create" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.611280 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="26928cf5-7a29-4fab-a501-5746726fc42a" containerName="mariadb-database-create" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.611308 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="cce1ea66-c6e5-41e7-b0fc-f915fab736f9" containerName="mariadb-account-create-update" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.611321 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ed8da8c-1d52-44a3-b1c8-b68000003d91" containerName="mariadb-account-create-update" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.611332 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4" containerName="mariadb-database-create" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.611343 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff" containerName="mariadb-database-create" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.611354 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="5980f4a0-814a-4f66-b637-80071a62061b" containerName="mariadb-account-create-update" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.611997 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.613970 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.613970 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-scripts" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.614870 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-42x4x" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.644496 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76"] Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.729778 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccfqr\" (UniqueName: \"kubernetes.io/projected/5c069034-d3fc-478b-a45d-2d6c64baf640-kube-api-access-ccfqr\") pod \"nova-kuttl-cell0-conductor-db-sync-jhf76\" (UID: \"5c069034-d3fc-478b-a45d-2d6c64baf640\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.729876 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c069034-d3fc-478b-a45d-2d6c64baf640-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-jhf76\" (UID: \"5c069034-d3fc-478b-a45d-2d6c64baf640\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.729959 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c069034-d3fc-478b-a45d-2d6c64baf640-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-jhf76\" (UID: \"5c069034-d3fc-478b-a45d-2d6c64baf640\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.831063 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c069034-d3fc-478b-a45d-2d6c64baf640-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-jhf76\" (UID: \"5c069034-d3fc-478b-a45d-2d6c64baf640\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.831375 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c069034-d3fc-478b-a45d-2d6c64baf640-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-jhf76\" (UID: \"5c069034-d3fc-478b-a45d-2d6c64baf640\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.831503 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccfqr\" (UniqueName: \"kubernetes.io/projected/5c069034-d3fc-478b-a45d-2d6c64baf640-kube-api-access-ccfqr\") pod \"nova-kuttl-cell0-conductor-db-sync-jhf76\" (UID: \"5c069034-d3fc-478b-a45d-2d6c64baf640\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.838871 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/5c069034-d3fc-478b-a45d-2d6c64baf640-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-jhf76\" (UID: \"5c069034-d3fc-478b-a45d-2d6c64baf640\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.839006 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c069034-d3fc-478b-a45d-2d6c64baf640-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-jhf76\" (UID: \"5c069034-d3fc-478b-a45d-2d6c64baf640\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.868294 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccfqr\" (UniqueName: \"kubernetes.io/projected/5c069034-d3fc-478b-a45d-2d6c64baf640-kube-api-access-ccfqr\") pod \"nova-kuttl-cell0-conductor-db-sync-jhf76\" (UID: \"5c069034-d3fc-478b-a45d-2d6c64baf640\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.931971 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" Jan 23 14:26:31 crc kubenswrapper[4775]: W0123 14:26:31.386183 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c069034_d3fc_478b_a45d_2d6c64baf640.slice/crio-da12d81f8906ac962615e076bffe5a6abbc13a285f304b96c6b6c46896b583b6 WatchSource:0}: Error finding container da12d81f8906ac962615e076bffe5a6abbc13a285f304b96c6b6c46896b583b6: Status 404 returned error can't find the container with id da12d81f8906ac962615e076bffe5a6abbc13a285f304b96c6b6c46896b583b6 Jan 23 14:26:31 crc kubenswrapper[4775]: I0123 14:26:31.387083 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76"] Jan 23 14:26:31 crc kubenswrapper[4775]: I0123 14:26:31.565132 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" event={"ID":"5c069034-d3fc-478b-a45d-2d6c64baf640","Type":"ContainerStarted","Data":"da12d81f8906ac962615e076bffe5a6abbc13a285f304b96c6b6c46896b583b6"} Jan 23 14:26:40 crc kubenswrapper[4775]: I0123 14:26:40.655301 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" event={"ID":"5c069034-d3fc-478b-a45d-2d6c64baf640","Type":"ContainerStarted","Data":"16a5d90dc00db76cb146a3ab929aa58cbca67687a4216b85575b35f06530fd3a"} Jan 23 14:26:40 crc kubenswrapper[4775]: I0123 14:26:40.681253 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" podStartSLOduration=2.379133384 podStartE2EDuration="10.681236869s" podCreationTimestamp="2026-01-23 14:26:30 +0000 UTC" firstStartedPulling="2026-01-23 14:26:31.388592866 +0000 UTC m=+1338.383421616" lastFinishedPulling="2026-01-23 14:26:39.690696361 +0000 UTC m=+1346.685525101" observedRunningTime="2026-01-23 14:26:40.677744128 +0000 UTC m=+1347.672572868" watchObservedRunningTime="2026-01-23 14:26:40.681236869 +0000 UTC m=+1347.676065609" Jan 23 14:26:51 crc kubenswrapper[4775]: I0123 14:26:51.773180 4775 generic.go:334] "Generic (PLEG): container finished" podID="5c069034-d3fc-478b-a45d-2d6c64baf640" 
containerID="16a5d90dc00db76cb146a3ab929aa58cbca67687a4216b85575b35f06530fd3a" exitCode=0 Jan 23 14:26:51 crc kubenswrapper[4775]: I0123 14:26:51.773264 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" event={"ID":"5c069034-d3fc-478b-a45d-2d6c64baf640","Type":"ContainerDied","Data":"16a5d90dc00db76cb146a3ab929aa58cbca67687a4216b85575b35f06530fd3a"} Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.101409 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.145935 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c069034-d3fc-478b-a45d-2d6c64baf640-scripts\") pod \"5c069034-d3fc-478b-a45d-2d6c64baf640\" (UID: \"5c069034-d3fc-478b-a45d-2d6c64baf640\") " Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.146371 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c069034-d3fc-478b-a45d-2d6c64baf640-config-data\") pod \"5c069034-d3fc-478b-a45d-2d6c64baf640\" (UID: \"5c069034-d3fc-478b-a45d-2d6c64baf640\") " Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.146547 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccfqr\" (UniqueName: \"kubernetes.io/projected/5c069034-d3fc-478b-a45d-2d6c64baf640-kube-api-access-ccfqr\") pod \"5c069034-d3fc-478b-a45d-2d6c64baf640\" (UID: \"5c069034-d3fc-478b-a45d-2d6c64baf640\") " Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.153909 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c069034-d3fc-478b-a45d-2d6c64baf640-kube-api-access-ccfqr" (OuterVolumeSpecName: "kube-api-access-ccfqr") pod "5c069034-d3fc-478b-a45d-2d6c64baf640" (UID: "5c069034-d3fc-478b-a45d-2d6c64baf640"). InnerVolumeSpecName "kube-api-access-ccfqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.161026 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c069034-d3fc-478b-a45d-2d6c64baf640-scripts" (OuterVolumeSpecName: "scripts") pod "5c069034-d3fc-478b-a45d-2d6c64baf640" (UID: "5c069034-d3fc-478b-a45d-2d6c64baf640"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.186608 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c069034-d3fc-478b-a45d-2d6c64baf640-config-data" (OuterVolumeSpecName: "config-data") pod "5c069034-d3fc-478b-a45d-2d6c64baf640" (UID: "5c069034-d3fc-478b-a45d-2d6c64baf640"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.248454 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccfqr\" (UniqueName: \"kubernetes.io/projected/5c069034-d3fc-478b-a45d-2d6c64baf640-kube-api-access-ccfqr\") on node \"crc\" DevicePath \"\"" Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.248493 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c069034-d3fc-478b-a45d-2d6c64baf640-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.248510 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c069034-d3fc-478b-a45d-2d6c64baf640-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.794543 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" event={"ID":"5c069034-d3fc-478b-a45d-2d6c64baf640","Type":"ContainerDied","Data":"da12d81f8906ac962615e076bffe5a6abbc13a285f304b96c6b6c46896b583b6"} Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.794599 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da12d81f8906ac962615e076bffe5a6abbc13a285f304b96c6b6c46896b583b6" Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.794678 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.932647 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:26:53 crc kubenswrapper[4775]: E0123 14:26:53.933148 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c069034-d3fc-478b-a45d-2d6c64baf640" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.933180 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c069034-d3fc-478b-a45d-2d6c64baf640" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.933430 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c069034-d3fc-478b-a45d-2d6c64baf640" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.934094 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.936304 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-42x4x" Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.937421 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.954266 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:26:54 crc kubenswrapper[4775]: I0123 14:26:54.059900 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c5ea649-3ec6-4684-a543-92cbb2561c2c-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"5c5ea649-3ec6-4684-a543-92cbb2561c2c\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:26:54 crc kubenswrapper[4775]: I0123 14:26:54.060228 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk566\" (UniqueName: \"kubernetes.io/projected/5c5ea649-3ec6-4684-a543-92cbb2561c2c-kube-api-access-qk566\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"5c5ea649-3ec6-4684-a543-92cbb2561c2c\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:26:54 crc kubenswrapper[4775]: I0123 14:26:54.161706 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk566\" (UniqueName: \"kubernetes.io/projected/5c5ea649-3ec6-4684-a543-92cbb2561c2c-kube-api-access-qk566\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"5c5ea649-3ec6-4684-a543-92cbb2561c2c\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:26:54 crc kubenswrapper[4775]: I0123 14:26:54.161859 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c5ea649-3ec6-4684-a543-92cbb2561c2c-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"5c5ea649-3ec6-4684-a543-92cbb2561c2c\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:26:54 crc kubenswrapper[4775]: I0123 14:26:54.168919 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c5ea649-3ec6-4684-a543-92cbb2561c2c-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"5c5ea649-3ec6-4684-a543-92cbb2561c2c\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:26:54 crc kubenswrapper[4775]: I0123 14:26:54.180913 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk566\" (UniqueName: \"kubernetes.io/projected/5c5ea649-3ec6-4684-a543-92cbb2561c2c-kube-api-access-qk566\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"5c5ea649-3ec6-4684-a543-92cbb2561c2c\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:26:54 crc kubenswrapper[4775]: I0123 14:26:54.251291 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:26:55 crc kubenswrapper[4775]: I0123 14:26:54.553079 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:26:55 crc kubenswrapper[4775]: W0123 14:26:54.560851 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c5ea649_3ec6_4684_a543_92cbb2561c2c.slice/crio-aae6c41a06b90b700f10ac781242a8cc1f26c49368ae3d0b71804b4f7c54253a WatchSource:0}: Error finding container aae6c41a06b90b700f10ac781242a8cc1f26c49368ae3d0b71804b4f7c54253a: Status 404 returned error can't find the container with id aae6c41a06b90b700f10ac781242a8cc1f26c49368ae3d0b71804b4f7c54253a Jan 23 14:26:55 crc kubenswrapper[4775]: I0123 14:26:54.806174 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"5c5ea649-3ec6-4684-a543-92cbb2561c2c","Type":"ContainerStarted","Data":"0fc3116ad5e11a579023342a2bde7e94e9992b7817bc89662a590eddceef91c7"} Jan 23 14:26:55 crc kubenswrapper[4775]: I0123 14:26:54.807223 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:26:55 crc kubenswrapper[4775]: I0123 14:26:54.807239 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"5c5ea649-3ec6-4684-a543-92cbb2561c2c","Type":"ContainerStarted","Data":"aae6c41a06b90b700f10ac781242a8cc1f26c49368ae3d0b71804b4f7c54253a"} Jan 23 14:26:59 crc kubenswrapper[4775]: I0123 14:26:59.296233 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:26:59 crc kubenswrapper[4775]: I0123 14:26:59.325586 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podStartSLOduration=6.325555491 podStartE2EDuration="6.325555491s" podCreationTimestamp="2026-01-23 14:26:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:26:54.822643489 +0000 UTC m=+1361.817472239" watchObservedRunningTime="2026-01-23 14:26:59.325555491 +0000 UTC m=+1366.320384271" Jan 23 14:26:59 crc kubenswrapper[4775]: I0123 14:26:59.767853 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf"] Jan 23 14:26:59 crc kubenswrapper[4775]: I0123 14:26:59.769124 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" Jan 23 14:26:59 crc kubenswrapper[4775]: I0123 14:26:59.771731 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-config-data" Jan 23 14:26:59 crc kubenswrapper[4775]: I0123 14:26:59.777357 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-scripts" Jan 23 14:26:59 crc kubenswrapper[4775]: I0123 14:26:59.785710 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf"] Jan 23 14:26:59 crc kubenswrapper[4775]: I0123 14:26:59.971968 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4b500f0-4005-40b9-a54d-0769cc8717f0-config-data\") pod \"nova-kuttl-cell0-cell-mapping-bgpzf\" (UID: \"e4b500f0-4005-40b9-a54d-0769cc8717f0\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" Jan 23 14:26:59 crc kubenswrapper[4775]: I0123 14:26:59.972040 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4b500f0-4005-40b9-a54d-0769cc8717f0-scripts\") pod \"nova-kuttl-cell0-cell-mapping-bgpzf\" (UID: \"e4b500f0-4005-40b9-a54d-0769cc8717f0\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" Jan 23 14:26:59 crc kubenswrapper[4775]: I0123 14:26:59.972115 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgft6\" (UniqueName: \"kubernetes.io/projected/e4b500f0-4005-40b9-a54d-0769cc8717f0-kube-api-access-xgft6\") pod \"nova-kuttl-cell0-cell-mapping-bgpzf\" (UID: \"e4b500f0-4005-40b9-a54d-0769cc8717f0\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.040872 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.042067 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.043651 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.050254 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.071543 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.072554 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.072779 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4b500f0-4005-40b9-a54d-0769cc8717f0-config-data\") pod \"nova-kuttl-cell0-cell-mapping-bgpzf\" (UID: \"e4b500f0-4005-40b9-a54d-0769cc8717f0\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.072871 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4b500f0-4005-40b9-a54d-0769cc8717f0-scripts\") pod \"nova-kuttl-cell0-cell-mapping-bgpzf\" (UID: \"e4b500f0-4005-40b9-a54d-0769cc8717f0\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.072920 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgft6\" (UniqueName: \"kubernetes.io/projected/e4b500f0-4005-40b9-a54d-0769cc8717f0-kube-api-access-xgft6\") pod \"nova-kuttl-cell0-cell-mapping-bgpzf\" (UID: \"e4b500f0-4005-40b9-a54d-0769cc8717f0\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.079244 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.082294 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4b500f0-4005-40b9-a54d-0769cc8717f0-config-data\") pod \"nova-kuttl-cell0-cell-mapping-bgpzf\" (UID: \"e4b500f0-4005-40b9-a54d-0769cc8717f0\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.083782 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.087302 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4b500f0-4005-40b9-a54d-0769cc8717f0-scripts\") pod \"nova-kuttl-cell0-cell-mapping-bgpzf\" (UID: \"e4b500f0-4005-40b9-a54d-0769cc8717f0\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.117464 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgft6\" (UniqueName: \"kubernetes.io/projected/e4b500f0-4005-40b9-a54d-0769cc8717f0-kube-api-access-xgft6\") pod \"nova-kuttl-cell0-cell-mapping-bgpzf\" (UID: \"e4b500f0-4005-40b9-a54d-0769cc8717f0\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.131079 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.139964 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.141390 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.144192 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.155646 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.176172 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade3732b-4731-4318-a3ef-7c97825a71ed-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"ade3732b-4731-4318-a3ef-7c97825a71ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.176451 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2a774c2-1605-4329-bd98-fba72cd66171-config-data\") pod \"nova-kuttl-api-0\" (UID: \"d2a774c2-1605-4329-bd98-fba72cd66171\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.176475 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd12f6cf-eef0-4d55-8500-2d64ed9e7648-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"cd12f6cf-eef0-4d55-8500-2d64ed9e7648\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.176515 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ade3732b-4731-4318-a3ef-7c97825a71ed-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"ade3732b-4731-4318-a3ef-7c97825a71ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.176546 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbqdv\" (UniqueName: \"kubernetes.io/projected/cd12f6cf-eef0-4d55-8500-2d64ed9e7648-kube-api-access-bbqdv\") pod \"nova-kuttl-scheduler-0\" (UID: \"cd12f6cf-eef0-4d55-8500-2d64ed9e7648\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.176574 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx22d\" (UniqueName: \"kubernetes.io/projected/ade3732b-4731-4318-a3ef-7c97825a71ed-kube-api-access-bx22d\") pod \"nova-kuttl-metadata-0\" (UID: \"ade3732b-4731-4318-a3ef-7c97825a71ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.176667 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2a774c2-1605-4329-bd98-fba72cd66171-logs\") pod \"nova-kuttl-api-0\" (UID: \"d2a774c2-1605-4329-bd98-fba72cd66171\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.176707 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzp69\" (UniqueName: \"kubernetes.io/projected/d2a774c2-1605-4329-bd98-fba72cd66171-kube-api-access-zzp69\") pod \"nova-kuttl-api-0\" (UID: \"d2a774c2-1605-4329-bd98-fba72cd66171\") " 
pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.221986 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.223199 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.228224 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-novncproxy-config-data" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.234192 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.277669 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbqdv\" (UniqueName: \"kubernetes.io/projected/cd12f6cf-eef0-4d55-8500-2d64ed9e7648-kube-api-access-bbqdv\") pod \"nova-kuttl-scheduler-0\" (UID: \"cd12f6cf-eef0-4d55-8500-2d64ed9e7648\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.277715 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx22d\" (UniqueName: \"kubernetes.io/projected/ade3732b-4731-4318-a3ef-7c97825a71ed-kube-api-access-bx22d\") pod \"nova-kuttl-metadata-0\" (UID: \"ade3732b-4731-4318-a3ef-7c97825a71ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.278183 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2a774c2-1605-4329-bd98-fba72cd66171-logs\") pod \"nova-kuttl-api-0\" (UID: \"d2a774c2-1605-4329-bd98-fba72cd66171\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.278343 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzp69\" (UniqueName: \"kubernetes.io/projected/d2a774c2-1605-4329-bd98-fba72cd66171-kube-api-access-zzp69\") pod \"nova-kuttl-api-0\" (UID: \"d2a774c2-1605-4329-bd98-fba72cd66171\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.278379 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade3732b-4731-4318-a3ef-7c97825a71ed-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"ade3732b-4731-4318-a3ef-7c97825a71ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.278413 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2a774c2-1605-4329-bd98-fba72cd66171-config-data\") pod \"nova-kuttl-api-0\" (UID: \"d2a774c2-1605-4329-bd98-fba72cd66171\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.278960 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2a774c2-1605-4329-bd98-fba72cd66171-logs\") pod \"nova-kuttl-api-0\" (UID: \"d2a774c2-1605-4329-bd98-fba72cd66171\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.278967 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/cd12f6cf-eef0-4d55-8500-2d64ed9e7648-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"cd12f6cf-eef0-4d55-8500-2d64ed9e7648\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.279019 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ade3732b-4731-4318-a3ef-7c97825a71ed-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"ade3732b-4731-4318-a3ef-7c97825a71ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.279457 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ade3732b-4731-4318-a3ef-7c97825a71ed-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"ade3732b-4731-4318-a3ef-7c97825a71ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.296510 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade3732b-4731-4318-a3ef-7c97825a71ed-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"ade3732b-4731-4318-a3ef-7c97825a71ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.297007 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd12f6cf-eef0-4d55-8500-2d64ed9e7648-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"cd12f6cf-eef0-4d55-8500-2d64ed9e7648\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.297968 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx22d\" (UniqueName: \"kubernetes.io/projected/ade3732b-4731-4318-a3ef-7c97825a71ed-kube-api-access-bx22d\") pod \"nova-kuttl-metadata-0\" (UID: \"ade3732b-4731-4318-a3ef-7c97825a71ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.298303 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbqdv\" (UniqueName: \"kubernetes.io/projected/cd12f6cf-eef0-4d55-8500-2d64ed9e7648-kube-api-access-bbqdv\") pod \"nova-kuttl-scheduler-0\" (UID: \"cd12f6cf-eef0-4d55-8500-2d64ed9e7648\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.300038 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2a774c2-1605-4329-bd98-fba72cd66171-config-data\") pod \"nova-kuttl-api-0\" (UID: \"d2a774c2-1605-4329-bd98-fba72cd66171\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.300068 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzp69\" (UniqueName: \"kubernetes.io/projected/d2a774c2-1605-4329-bd98-fba72cd66171-kube-api-access-zzp69\") pod \"nova-kuttl-api-0\" (UID: \"d2a774c2-1605-4329-bd98-fba72cd66171\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.372964 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.383314 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msdnd\" (UniqueName: \"kubernetes.io/projected/d6487ecc-f390-4837-8097-15e1b0bc28ac-kube-api-access-msdnd\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"d6487ecc-f390-4837-8097-15e1b0bc28ac\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.383401 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6487ecc-f390-4837-8097-15e1b0bc28ac-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"d6487ecc-f390-4837-8097-15e1b0bc28ac\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.485577 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msdnd\" (UniqueName: \"kubernetes.io/projected/d6487ecc-f390-4837-8097-15e1b0bc28ac-kube-api-access-msdnd\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"d6487ecc-f390-4837-8097-15e1b0bc28ac\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.485654 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6487ecc-f390-4837-8097-15e1b0bc28ac-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"d6487ecc-f390-4837-8097-15e1b0bc28ac\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.490795 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6487ecc-f390-4837-8097-15e1b0bc28ac-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"d6487ecc-f390-4837-8097-15e1b0bc28ac\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.503562 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msdnd\" (UniqueName: \"kubernetes.io/projected/d6487ecc-f390-4837-8097-15e1b0bc28ac-kube-api-access-msdnd\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"d6487ecc-f390-4837-8097-15e1b0bc28ac\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.520369 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.529879 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.542504 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.644131 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf"]
Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.682865 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl"]
Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.683800 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl"
Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.687833 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data"
Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.687962 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-scripts"
Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.691605 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/470fdecf-a054-4735-90e9-82e8f2df7393-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-jnchl\" (UID: \"470fdecf-a054-4735-90e9-82e8f2df7393\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl"
Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.691648 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnsz8\" (UniqueName: \"kubernetes.io/projected/470fdecf-a054-4735-90e9-82e8f2df7393-kube-api-access-xnsz8\") pod \"nova-kuttl-cell1-conductor-db-sync-jnchl\" (UID: \"470fdecf-a054-4735-90e9-82e8f2df7393\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl"
Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.691765 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/470fdecf-a054-4735-90e9-82e8f2df7393-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-jnchl\" (UID: \"470fdecf-a054-4735-90e9-82e8f2df7393\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl"
Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.721993 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl"]
Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.797722 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/470fdecf-a054-4735-90e9-82e8f2df7393-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-jnchl\" (UID: \"470fdecf-a054-4735-90e9-82e8f2df7393\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl"
Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.797773 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnsz8\" (UniqueName: \"kubernetes.io/projected/470fdecf-a054-4735-90e9-82e8f2df7393-kube-api-access-xnsz8\") pod \"nova-kuttl-cell1-conductor-db-sync-jnchl\" (UID: \"470fdecf-a054-4735-90e9-82e8f2df7393\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl"
Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.797851 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/470fdecf-a054-4735-90e9-82e8f2df7393-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-jnchl\" (UID: \"470fdecf-a054-4735-90e9-82e8f2df7393\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl"
Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.803363 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/470fdecf-a054-4735-90e9-82e8f2df7393-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-jnchl\" (UID: \"470fdecf-a054-4735-90e9-82e8f2df7393\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl"
Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.812736 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.815028 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/470fdecf-a054-4735-90e9-82e8f2df7393-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-jnchl\" (UID: \"470fdecf-a054-4735-90e9-82e8f2df7393\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl"
Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.818943 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnsz8\" (UniqueName: \"kubernetes.io/projected/470fdecf-a054-4735-90e9-82e8f2df7393-kube-api-access-xnsz8\") pod \"nova-kuttl-cell1-conductor-db-sync-jnchl\" (UID: \"470fdecf-a054-4735-90e9-82e8f2df7393\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl"
Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.869474 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"d2a774c2-1605-4329-bd98-fba72cd66171","Type":"ContainerStarted","Data":"68f512301f6d964a7e5e33ce512013bee3b54f46f7a054e898c3f9210e426230"}
Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.870851 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" event={"ID":"e4b500f0-4005-40b9-a54d-0769cc8717f0","Type":"ContainerStarted","Data":"dfed5acd49d6415be2734162e5acd7ffb8af9234ab858619c7b284e2c7ee456d"}
Jan 23 14:27:01 crc kubenswrapper[4775]: I0123 14:27:01.006577 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl"
Jan 23 14:27:01 crc kubenswrapper[4775]: I0123 14:27:01.052373 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:27:01 crc kubenswrapper[4775]: I0123 14:27:01.079986 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"]
Jan 23 14:27:01 crc kubenswrapper[4775]: W0123 14:27:01.102796 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd12f6cf_eef0_4d55_8500_2d64ed9e7648.slice/crio-c74b107de095453d19a75391e5aae3a435d1e6489cec783e11c3fb51cedba1a5 WatchSource:0}: Error finding container c74b107de095453d19a75391e5aae3a435d1e6489cec783e11c3fb51cedba1a5: Status 404 returned error can't find the container with id c74b107de095453d19a75391e5aae3a435d1e6489cec783e11c3fb51cedba1a5
Jan 23 14:27:01 crc kubenswrapper[4775]: I0123 14:27:01.107876 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:27:01 crc kubenswrapper[4775]: I0123 14:27:01.501398 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl"]
Jan 23 14:27:01 crc kubenswrapper[4775]: I0123 14:27:01.886538 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl" event={"ID":"470fdecf-a054-4735-90e9-82e8f2df7393","Type":"ContainerStarted","Data":"4416e85269b1c4f191cdc1bfa52a3e5ae7f058b4bf7a7282d8bc2d3b5f93f115"}
Jan 23 14:27:01 crc kubenswrapper[4775]: I0123 14:27:01.886965 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl" event={"ID":"470fdecf-a054-4735-90e9-82e8f2df7393","Type":"ContainerStarted","Data":"188e84ad5e9b447be9a639852503c5b0f8e66bee963af4f23bdc811b6b604dc2"}
Jan 23 14:27:01 crc kubenswrapper[4775]: I0123 14:27:01.888346 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"d6487ecc-f390-4837-8097-15e1b0bc28ac","Type":"ContainerStarted","Data":"f3a42cea8fd58140cfe12473c775a1de35761c7ed3cab47b52b03cbea0efb84b"}
Jan 23 14:27:01 crc kubenswrapper[4775]: I0123 14:27:01.889937 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" event={"ID":"e4b500f0-4005-40b9-a54d-0769cc8717f0","Type":"ContainerStarted","Data":"204b70c75b108eb876b17c40860b15870affa382adc84f2a27cb048cf9061fa7"}
Jan 23 14:27:01 crc kubenswrapper[4775]: I0123 14:27:01.893121 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"ade3732b-4731-4318-a3ef-7c97825a71ed","Type":"ContainerStarted","Data":"ed764791e32d9123ae4beaa7c6d7c2307e2b1a91e61e749a1d2402749b2f21a1"}
Jan 23 14:27:01 crc kubenswrapper[4775]: I0123 14:27:01.894688 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"cd12f6cf-eef0-4d55-8500-2d64ed9e7648","Type":"ContainerStarted","Data":"c74b107de095453d19a75391e5aae3a435d1e6489cec783e11c3fb51cedba1a5"}
Jan 23 14:27:01 crc kubenswrapper[4775]: I0123 14:27:01.905262 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl" podStartSLOduration=1.9052463880000001 podStartE2EDuration="1.905246388s" podCreationTimestamp="2026-01-23 14:27:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:27:01.899550593 +0000 UTC m=+1368.894379343" watchObservedRunningTime="2026-01-23 14:27:01.905246388 +0000 UTC m=+1368.900075128"
Jan 23 14:27:01 crc kubenswrapper[4775]: I0123 14:27:01.920128 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" podStartSLOduration=2.920075926 podStartE2EDuration="2.920075926s" podCreationTimestamp="2026-01-23 14:26:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:27:01.911603361 +0000 UTC m=+1368.906432101" watchObservedRunningTime="2026-01-23 14:27:01.920075926 +0000 UTC m=+1368.914904676"
Jan 23 14:27:04 crc kubenswrapper[4775]: I0123 14:27:04.922558 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"ade3732b-4731-4318-a3ef-7c97825a71ed","Type":"ContainerStarted","Data":"e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1"}
Jan 23 14:27:04 crc kubenswrapper[4775]: I0123 14:27:04.922920 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"ade3732b-4731-4318-a3ef-7c97825a71ed","Type":"ContainerStarted","Data":"f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de"}
Jan 23 14:27:04 crc kubenswrapper[4775]: I0123 14:27:04.925151 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"cd12f6cf-eef0-4d55-8500-2d64ed9e7648","Type":"ContainerStarted","Data":"fbe0abac4e6cee8d6565dd2b6582cfcf62e3451343c85ba596566cd55c678a15"}
Jan 23 14:27:04 crc kubenswrapper[4775]: I0123 14:27:04.927310 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"d6487ecc-f390-4837-8097-15e1b0bc28ac","Type":"ContainerStarted","Data":"e9cd293241d6fb23305cd22644b9ba266d18f24d704393111b6fac686f6c275a"}
Jan 23 14:27:04 crc kubenswrapper[4775]: I0123 14:27:04.929150 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"d2a774c2-1605-4329-bd98-fba72cd66171","Type":"ContainerStarted","Data":"b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216"}
Jan 23 14:27:04 crc kubenswrapper[4775]: I0123 14:27:04.929180 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"d2a774c2-1605-4329-bd98-fba72cd66171","Type":"ContainerStarted","Data":"f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980"}
Jan 23 14:27:04 crc kubenswrapper[4775]: I0123 14:27:04.949268 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.009893812 podStartE2EDuration="4.949247014s" podCreationTimestamp="2026-01-23 14:27:00 +0000 UTC" firstStartedPulling="2026-01-23 14:27:01.06594812 +0000 UTC m=+1368.060776860" lastFinishedPulling="2026-01-23 14:27:04.005301322 +0000 UTC m=+1371.000130062" observedRunningTime="2026-01-23 14:27:04.938082901 +0000 UTC m=+1371.932911651" watchObservedRunningTime="2026-01-23 14:27:04.949247014 +0000 UTC m=+1371.944075764"
Jan 23 14:27:04 crc kubenswrapper[4775]: I0123 14:27:04.966646 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=1.798131851 podStartE2EDuration="4.966626356s" podCreationTimestamp="2026-01-23 14:27:00 +0000 UTC" firstStartedPulling="2026-01-23 14:27:00.807963084 +0000 UTC m=+1367.802791824" lastFinishedPulling="2026-01-23 14:27:03.976457599 +0000 UTC m=+1370.971286329" observedRunningTime="2026-01-23 14:27:04.964417743 +0000 UTC m=+1371.959246483" watchObservedRunningTime="2026-01-23 14:27:04.966626356 +0000 UTC m=+1371.961455096"
Jan 23 14:27:04 crc kubenswrapper[4775]: I0123 14:27:04.982590 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.103174548 podStartE2EDuration="4.982574477s" podCreationTimestamp="2026-01-23 14:27:00 +0000 UTC" firstStartedPulling="2026-01-23 14:27:01.111128806 +0000 UTC m=+1368.105957546" lastFinishedPulling="2026-01-23 14:27:03.990528735 +0000 UTC m=+1370.985357475" observedRunningTime="2026-01-23 14:27:04.975195234 +0000 UTC m=+1371.970023974" watchObservedRunningTime="2026-01-23 14:27:04.982574477 +0000 UTC m=+1371.977403217"
Jan 23 14:27:04 crc kubenswrapper[4775]: I0123 14:27:04.992085 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podStartSLOduration=2.084008784 podStartE2EDuration="4.992070392s" podCreationTimestamp="2026-01-23 14:27:00 +0000 UTC" firstStartedPulling="2026-01-23 14:27:01.097258565 +0000 UTC m=+1368.092087305" lastFinishedPulling="2026-01-23 14:27:04.005320133 +0000 UTC m=+1371.000148913" observedRunningTime="2026-01-23 14:27:04.990894018 +0000 UTC m=+1371.985722778" watchObservedRunningTime="2026-01-23 14:27:04.992070392 +0000 UTC m=+1371.986899132"
Jan 23 14:27:05 crc kubenswrapper[4775]: I0123 14:27:05.521559 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:27:05 crc kubenswrapper[4775]: I0123 14:27:05.530948 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:05 crc kubenswrapper[4775]: I0123 14:27:05.531057 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:05 crc kubenswrapper[4775]: I0123 14:27:05.543616 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:27:07 crc kubenswrapper[4775]: I0123 14:27:07.965335 4775 generic.go:334] "Generic (PLEG): container finished" podID="e4b500f0-4005-40b9-a54d-0769cc8717f0" containerID="204b70c75b108eb876b17c40860b15870affa382adc84f2a27cb048cf9061fa7" exitCode=0
Jan 23 14:27:07 crc kubenswrapper[4775]: I0123 14:27:07.965439 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" event={"ID":"e4b500f0-4005-40b9-a54d-0769cc8717f0","Type":"ContainerDied","Data":"204b70c75b108eb876b17c40860b15870affa382adc84f2a27cb048cf9061fa7"}
Jan 23 14:27:08 crc kubenswrapper[4775]: I0123 14:27:08.978627 4775 generic.go:334] "Generic (PLEG): container finished" podID="470fdecf-a054-4735-90e9-82e8f2df7393" containerID="4416e85269b1c4f191cdc1bfa52a3e5ae7f058b4bf7a7282d8bc2d3b5f93f115" exitCode=0
Jan 23 14:27:08 crc kubenswrapper[4775]: I0123 14:27:08.978753 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl" event={"ID":"470fdecf-a054-4735-90e9-82e8f2df7393","Type":"ContainerDied","Data":"4416e85269b1c4f191cdc1bfa52a3e5ae7f058b4bf7a7282d8bc2d3b5f93f115"}
Jan 23 14:27:09 crc kubenswrapper[4775]: I0123 14:27:09.389401 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf"
Jan 23 14:27:09 crc kubenswrapper[4775]: I0123 14:27:09.458395 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgft6\" (UniqueName: \"kubernetes.io/projected/e4b500f0-4005-40b9-a54d-0769cc8717f0-kube-api-access-xgft6\") pod \"e4b500f0-4005-40b9-a54d-0769cc8717f0\" (UID: \"e4b500f0-4005-40b9-a54d-0769cc8717f0\") "
Jan 23 14:27:09 crc kubenswrapper[4775]: I0123 14:27:09.458504 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4b500f0-4005-40b9-a54d-0769cc8717f0-scripts\") pod \"e4b500f0-4005-40b9-a54d-0769cc8717f0\" (UID: \"e4b500f0-4005-40b9-a54d-0769cc8717f0\") "
Jan 23 14:27:09 crc kubenswrapper[4775]: I0123 14:27:09.458621 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4b500f0-4005-40b9-a54d-0769cc8717f0-config-data\") pod \"e4b500f0-4005-40b9-a54d-0769cc8717f0\" (UID: \"e4b500f0-4005-40b9-a54d-0769cc8717f0\") "
Jan 23 14:27:09 crc kubenswrapper[4775]: I0123 14:27:09.465871 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4b500f0-4005-40b9-a54d-0769cc8717f0-scripts" (OuterVolumeSpecName: "scripts") pod "e4b500f0-4005-40b9-a54d-0769cc8717f0" (UID: "e4b500f0-4005-40b9-a54d-0769cc8717f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:27:09 crc kubenswrapper[4775]: I0123 14:27:09.466837 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4b500f0-4005-40b9-a54d-0769cc8717f0-kube-api-access-xgft6" (OuterVolumeSpecName: "kube-api-access-xgft6") pod "e4b500f0-4005-40b9-a54d-0769cc8717f0" (UID: "e4b500f0-4005-40b9-a54d-0769cc8717f0"). InnerVolumeSpecName "kube-api-access-xgft6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:27:09 crc kubenswrapper[4775]: I0123 14:27:09.500527 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4b500f0-4005-40b9-a54d-0769cc8717f0-config-data" (OuterVolumeSpecName: "config-data") pod "e4b500f0-4005-40b9-a54d-0769cc8717f0" (UID: "e4b500f0-4005-40b9-a54d-0769cc8717f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:27:09 crc kubenswrapper[4775]: I0123 14:27:09.560947 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4b500f0-4005-40b9-a54d-0769cc8717f0-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:27:09 crc kubenswrapper[4775]: I0123 14:27:09.560981 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgft6\" (UniqueName: \"kubernetes.io/projected/e4b500f0-4005-40b9-a54d-0769cc8717f0-kube-api-access-xgft6\") on node \"crc\" DevicePath \"\""
Jan 23 14:27:09 crc kubenswrapper[4775]: I0123 14:27:09.560994 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4b500f0-4005-40b9-a54d-0769cc8717f0-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 14:27:09 crc kubenswrapper[4775]: I0123 14:27:09.990850 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" event={"ID":"e4b500f0-4005-40b9-a54d-0769cc8717f0","Type":"ContainerDied","Data":"dfed5acd49d6415be2734162e5acd7ffb8af9234ab858619c7b284e2c7ee456d"}
Jan 23 14:27:09 crc kubenswrapper[4775]: I0123 14:27:09.991274 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfed5acd49d6415be2734162e5acd7ffb8af9234ab858619c7b284e2c7ee456d"
Jan 23 14:27:09 crc kubenswrapper[4775]: I0123 14:27:09.990995 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf"
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.186339 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.186606 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="d2a774c2-1605-4329-bd98-fba72cd66171" containerName="nova-kuttl-api-log" containerID="cri-o://f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980" gracePeriod=30
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.186771 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="d2a774c2-1605-4329-bd98-fba72cd66171" containerName="nova-kuttl-api-api" containerID="cri-o://b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216" gracePeriod=30
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.231339 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.231534 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="cd12f6cf-eef0-4d55-8500-2d64ed9e7648" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://fbe0abac4e6cee8d6565dd2b6582cfcf62e3451343c85ba596566cd55c678a15" gracePeriod=30
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.256059 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.256268 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="ade3732b-4731-4318-a3ef-7c97825a71ed" containerName="nova-kuttl-metadata-log" containerID="cri-o://f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de" gracePeriod=30
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.256381 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="ade3732b-4731-4318-a3ef-7c97825a71ed" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1" gracePeriod=30
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.292158 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl"
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.373437 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/470fdecf-a054-4735-90e9-82e8f2df7393-scripts\") pod \"470fdecf-a054-4735-90e9-82e8f2df7393\" (UID: \"470fdecf-a054-4735-90e9-82e8f2df7393\") "
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.373529 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/470fdecf-a054-4735-90e9-82e8f2df7393-config-data\") pod \"470fdecf-a054-4735-90e9-82e8f2df7393\" (UID: \"470fdecf-a054-4735-90e9-82e8f2df7393\") "
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.373580 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnsz8\" (UniqueName: \"kubernetes.io/projected/470fdecf-a054-4735-90e9-82e8f2df7393-kube-api-access-xnsz8\") pod \"470fdecf-a054-4735-90e9-82e8f2df7393\" (UID: \"470fdecf-a054-4735-90e9-82e8f2df7393\") "
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.376915 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/470fdecf-a054-4735-90e9-82e8f2df7393-kube-api-access-xnsz8" (OuterVolumeSpecName: "kube-api-access-xnsz8") pod "470fdecf-a054-4735-90e9-82e8f2df7393" (UID: "470fdecf-a054-4735-90e9-82e8f2df7393"). InnerVolumeSpecName "kube-api-access-xnsz8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.377512 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/470fdecf-a054-4735-90e9-82e8f2df7393-scripts" (OuterVolumeSpecName: "scripts") pod "470fdecf-a054-4735-90e9-82e8f2df7393" (UID: "470fdecf-a054-4735-90e9-82e8f2df7393"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.408989 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/470fdecf-a054-4735-90e9-82e8f2df7393-config-data" (OuterVolumeSpecName: "config-data") pod "470fdecf-a054-4735-90e9-82e8f2df7393" (UID: "470fdecf-a054-4735-90e9-82e8f2df7393"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.475697 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/470fdecf-a054-4735-90e9-82e8f2df7393-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.475739 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/470fdecf-a054-4735-90e9-82e8f2df7393-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.475759 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnsz8\" (UniqueName: \"kubernetes.io/projected/470fdecf-a054-4735-90e9-82e8f2df7393-kube-api-access-xnsz8\") on node \"crc\" DevicePath \"\""
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.549510 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.581635 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.812517 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.858791 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.880609 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzp69\" (UniqueName: \"kubernetes.io/projected/d2a774c2-1605-4329-bd98-fba72cd66171-kube-api-access-zzp69\") pod \"d2a774c2-1605-4329-bd98-fba72cd66171\" (UID: \"d2a774c2-1605-4329-bd98-fba72cd66171\") "
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.880670 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2a774c2-1605-4329-bd98-fba72cd66171-logs\") pod \"d2a774c2-1605-4329-bd98-fba72cd66171\" (UID: \"d2a774c2-1605-4329-bd98-fba72cd66171\") "
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.880696 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bx22d\" (UniqueName: \"kubernetes.io/projected/ade3732b-4731-4318-a3ef-7c97825a71ed-kube-api-access-bx22d\") pod \"ade3732b-4731-4318-a3ef-7c97825a71ed\" (UID: \"ade3732b-4731-4318-a3ef-7c97825a71ed\") "
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.880739 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2a774c2-1605-4329-bd98-fba72cd66171-config-data\") pod \"d2a774c2-1605-4329-bd98-fba72cd66171\" (UID: \"d2a774c2-1605-4329-bd98-fba72cd66171\") "
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.880756 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ade3732b-4731-4318-a3ef-7c97825a71ed-logs\") pod \"ade3732b-4731-4318-a3ef-7c97825a71ed\" (UID: \"ade3732b-4731-4318-a3ef-7c97825a71ed\") "
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.880799 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade3732b-4731-4318-a3ef-7c97825a71ed-config-data\") pod \"ade3732b-4731-4318-a3ef-7c97825a71ed\" (UID: \"ade3732b-4731-4318-a3ef-7c97825a71ed\") "
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.881869 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ade3732b-4731-4318-a3ef-7c97825a71ed-logs" (OuterVolumeSpecName: "logs") pod "ade3732b-4731-4318-a3ef-7c97825a71ed" (UID: "ade3732b-4731-4318-a3ef-7c97825a71ed"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.882015 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2a774c2-1605-4329-bd98-fba72cd66171-logs" (OuterVolumeSpecName: "logs") pod "d2a774c2-1605-4329-bd98-fba72cd66171" (UID: "d2a774c2-1605-4329-bd98-fba72cd66171"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.885668 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2a774c2-1605-4329-bd98-fba72cd66171-kube-api-access-zzp69" (OuterVolumeSpecName: "kube-api-access-zzp69") pod "d2a774c2-1605-4329-bd98-fba72cd66171" (UID: "d2a774c2-1605-4329-bd98-fba72cd66171"). InnerVolumeSpecName "kube-api-access-zzp69". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.886199 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ade3732b-4731-4318-a3ef-7c97825a71ed-kube-api-access-bx22d" (OuterVolumeSpecName: "kube-api-access-bx22d") pod "ade3732b-4731-4318-a3ef-7c97825a71ed" (UID: "ade3732b-4731-4318-a3ef-7c97825a71ed"). InnerVolumeSpecName "kube-api-access-bx22d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.900507 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade3732b-4731-4318-a3ef-7c97825a71ed-config-data" (OuterVolumeSpecName: "config-data") pod "ade3732b-4731-4318-a3ef-7c97825a71ed" (UID: "ade3732b-4731-4318-a3ef-7c97825a71ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.907460 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2a774c2-1605-4329-bd98-fba72cd66171-config-data" (OuterVolumeSpecName: "config-data") pod "d2a774c2-1605-4329-bd98-fba72cd66171" (UID: "d2a774c2-1605-4329-bd98-fba72cd66171"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.982660 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzp69\" (UniqueName: \"kubernetes.io/projected/d2a774c2-1605-4329-bd98-fba72cd66171-kube-api-access-zzp69\") on node \"crc\" DevicePath \"\""
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.982685 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2a774c2-1605-4329-bd98-fba72cd66171-logs\") on node \"crc\" DevicePath \"\""
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.982695 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bx22d\" (UniqueName: \"kubernetes.io/projected/ade3732b-4731-4318-a3ef-7c97825a71ed-kube-api-access-bx22d\") on node \"crc\" DevicePath \"\""
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.982704 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ade3732b-4731-4318-a3ef-7c97825a71ed-logs\") on node \"crc\" DevicePath \"\""
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.982713 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2a774c2-1605-4329-bd98-fba72cd66171-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.982721 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade3732b-4731-4318-a3ef-7c97825a71ed-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.001070 4775 generic.go:334] "Generic (PLEG): container finished" podID="d2a774c2-1605-4329-bd98-fba72cd66171" containerID="b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216" exitCode=0
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.002505 4775 generic.go:334] "Generic (PLEG): container finished" podID="d2a774c2-1605-4329-bd98-fba72cd66171" containerID="f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980" exitCode=143
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.001165 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.001138 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"d2a774c2-1605-4329-bd98-fba72cd66171","Type":"ContainerDied","Data":"b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216"}
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.002818 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"d2a774c2-1605-4329-bd98-fba72cd66171","Type":"ContainerDied","Data":"f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980"}
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.002844 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"d2a774c2-1605-4329-bd98-fba72cd66171","Type":"ContainerDied","Data":"68f512301f6d964a7e5e33ce512013bee3b54f46f7a054e898c3f9210e426230"}
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.002863 4775 scope.go:117] "RemoveContainer" containerID="b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.007374 4775 generic.go:334] "Generic (PLEG): container finished" podID="ade3732b-4731-4318-a3ef-7c97825a71ed" containerID="e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1" exitCode=0
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.007397 4775 generic.go:334] "Generic (PLEG): container finished" podID="ade3732b-4731-4318-a3ef-7c97825a71ed" containerID="f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de" exitCode=143
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.007440 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"ade3732b-4731-4318-a3ef-7c97825a71ed","Type":"ContainerDied","Data":"e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1"}
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.007463 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"ade3732b-4731-4318-a3ef-7c97825a71ed","Type":"ContainerDied","Data":"f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de"}
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.007473 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"ade3732b-4731-4318-a3ef-7c97825a71ed","Type":"ContainerDied","Data":"ed764791e32d9123ae4beaa7c6d7c2307e2b1a91e61e749a1d2402749b2f21a1"}
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.007672 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.009065 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl" event={"ID":"470fdecf-a054-4735-90e9-82e8f2df7393","Type":"ContainerDied","Data":"188e84ad5e9b447be9a639852503c5b0f8e66bee963af4f23bdc811b6b604dc2"}
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.009090 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="188e84ad5e9b447be9a639852503c5b0f8e66bee963af4f23bdc811b6b604dc2"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.009112 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.020173 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.032580 4775 scope.go:117] "RemoveContainer" containerID="f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.073713 4775 scope.go:117] "RemoveContainer" containerID="b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216"
Jan 23 14:27:11 crc kubenswrapper[4775]: E0123 14:27:11.082879 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216\": container with ID starting with b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216 not found: ID does not exist" containerID="b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.082922 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216"} err="failed to get container status \"b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216\": rpc error: code = NotFound desc = could not find container \"b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216\": container with ID starting with b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216 not found: ID does not exist"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.082949 4775 scope.go:117] "RemoveContainer" containerID="f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980"
Jan 23 14:27:11 crc kubenswrapper[4775]: E0123 14:27:11.085384 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980\": container with ID starting with f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980 not found: ID does not exist" containerID="f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.085541 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980"} err="failed to get container status \"f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980\": rpc error: code = NotFound desc = could not find container \"f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980\": container with ID starting with f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980 not found: ID does not exist"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.085562 4775 scope.go:117] "RemoveContainer" containerID="b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.086747 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216"} err="failed to get container status \"b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216\": rpc error: code = NotFound desc = could not find container \"b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216\": container with ID starting with b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216 not found: ID does not exist"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.086766 4775 scope.go:117] "RemoveContainer" containerID="f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.087586 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980"} err="failed to get container status \"f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980\": rpc error: code = NotFound desc = could not find container \"f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980\": container with ID starting with f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980 not found: ID does not exist"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.087635 4775 scope.go:117] "RemoveContainer" containerID="e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.106905 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.123773 4775 scope.go:117] "RemoveContainer" containerID="f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.127546 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.132786 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.140849 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:27:11 crc kubenswrapper[4775]: E0123 14:27:11.141239 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2a774c2-1605-4329-bd98-fba72cd66171" containerName="nova-kuttl-api-api"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.141257 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2a774c2-1605-4329-bd98-fba72cd66171" containerName="nova-kuttl-api-api"
Jan 23 14:27:11 crc kubenswrapper[4775]: E0123 14:27:11.141273 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4b500f0-4005-40b9-a54d-0769cc8717f0" containerName="nova-manage"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.141280 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4b500f0-4005-40b9-a54d-0769cc8717f0" containerName="nova-manage"
Jan 23 14:27:11 crc kubenswrapper[4775]: E0123 14:27:11.141296 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade3732b-4731-4318-a3ef-7c97825a71ed" containerName="nova-kuttl-metadata-log"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.141303 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade3732b-4731-4318-a3ef-7c97825a71ed" containerName="nova-kuttl-metadata-log"
Jan 23 14:27:11 crc kubenswrapper[4775]: E0123 14:27:11.141314 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2a774c2-1605-4329-bd98-fba72cd66171" containerName="nova-kuttl-api-log"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.141320 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2a774c2-1605-4329-bd98-fba72cd66171" containerName="nova-kuttl-api-log"
Jan 23 14:27:11 crc kubenswrapper[4775]: E0123 14:27:11.141334 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade3732b-4731-4318-a3ef-7c97825a71ed" containerName="nova-kuttl-metadata-metadata"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.141340 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade3732b-4731-4318-a3ef-7c97825a71ed" containerName="nova-kuttl-metadata-metadata"
Jan 23 14:27:11 crc kubenswrapper[4775]: E0123 14:27:11.141351 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="470fdecf-a054-4735-90e9-82e8f2df7393" containerName="nova-kuttl-cell1-conductor-db-sync"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.141357 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="470fdecf-a054-4735-90e9-82e8f2df7393" containerName="nova-kuttl-cell1-conductor-db-sync"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.141495 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade3732b-4731-4318-a3ef-7c97825a71ed" containerName="nova-kuttl-metadata-metadata"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.141507 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2a774c2-1605-4329-bd98-fba72cd66171" containerName="nova-kuttl-api-api"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.141518 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade3732b-4731-4318-a3ef-7c97825a71ed" containerName="nova-kuttl-metadata-log"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.141528 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2a774c2-1605-4329-bd98-fba72cd66171" containerName="nova-kuttl-api-log"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.141537 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="470fdecf-a054-4735-90e9-82e8f2df7393" containerName="nova-kuttl-cell1-conductor-db-sync"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.141548 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4b500f0-4005-40b9-a54d-0769cc8717f0" containerName="nova-manage"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.160519 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.160675 4775 scope.go:117] "RemoveContainer" containerID="e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1"
Jan 23 14:27:11 crc kubenswrapper[4775]: E0123 14:27:11.161205 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1\": container with ID starting with e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1 not found: ID does not exist" containerID="e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.161235 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1"} err="failed to get container status \"e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1\": rpc error: code = NotFound desc = could not find container \"e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1\": container with ID starting with e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1 not found: ID does not exist"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.161258 4775 scope.go:117] "RemoveContainer" containerID="f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de"
Jan 23 14:27:11 crc kubenswrapper[4775]: E0123 14:27:11.161611 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de\": container with ID starting with f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de not found: ID does not exist" containerID="f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.161669 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de"} err="failed to get container status \"f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de\": rpc error: code = NotFound desc = could not find container \"f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de\": container with ID starting with f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de not found: ID does not exist"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.161696 4775 scope.go:117] "RemoveContainer" containerID="e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.163469 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.163468 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1"} err="failed to get container status \"e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1\": rpc error: code = NotFound desc = could not find container \"e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1\": container with ID starting with e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1 not found: ID does not exist"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.163529 4775 scope.go:117] "RemoveContainer" containerID="f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.165513 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de"} err="failed to get container status \"f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de\": rpc error: code = NotFound desc = could not find container \"f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de\": container with ID starting with f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de not found: ID does not exist"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.178946 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"]
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.186378 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc956fab-2268-4862-a43b-57501989f228-logs\") pod \"nova-kuttl-api-0\" (UID: \"fc956fab-2268-4862-a43b-57501989f228\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.186461 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc956fab-2268-4862-a43b-57501989f228-config-data\") pod \"nova-kuttl-api-0\" (UID: \"fc956fab-2268-4862-a43b-57501989f228\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.186515 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x96t9\" (UniqueName: \"kubernetes.io/projected/fc956fab-2268-4862-a43b-57501989f228-kube-api-access-x96t9\") pod \"nova-kuttl-api-0\" (UID: \"fc956fab-2268-4862-a43b-57501989f228\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.199573 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.203916 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.204082 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.209278 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.217245 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.218777 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.220593 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.226727 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"]
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.232605 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.293852 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/354efd80-1bfe-4969-80e5-6ba275d34697-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"354efd80-1bfe-4969-80e5-6ba275d34697\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.293927 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x96t9\" (UniqueName: \"kubernetes.io/projected/fc956fab-2268-4862-a43b-57501989f228-kube-api-access-x96t9\") pod \"nova-kuttl-api-0\" (UID: \"fc956fab-2268-4862-a43b-57501989f228\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.293966 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/354efd80-1bfe-4969-80e5-6ba275d34697-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"354efd80-1bfe-4969-80e5-6ba275d34697\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.294029 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc956fab-2268-4862-a43b-57501989f228-logs\") pod \"nova-kuttl-api-0\" (UID: \"fc956fab-2268-4862-a43b-57501989f228\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.294055 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-297qf\" (UniqueName: \"kubernetes.io/projected/354efd80-1bfe-4969-80e5-6ba275d34697-kube-api-access-297qf\") pod \"nova-kuttl-metadata-0\" (UID: \"354efd80-1bfe-4969-80e5-6ba275d34697\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.294085 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60634ae6-20de-4c41-b4bf-0fceda1df7e5-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"60634ae6-20de-4c41-b4bf-0fceda1df7e5\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.294138 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc956fab-2268-4862-a43b-57501989f228-config-data\") pod \"nova-kuttl-api-0\" (UID: \"fc956fab-2268-4862-a43b-57501989f228\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.294167 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp587\" (UniqueName: \"kubernetes.io/projected/60634ae6-20de-4c41-b4bf-0fceda1df7e5-kube-api-access-pp587\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"60634ae6-20de-4c41-b4bf-0fceda1df7e5\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.294540 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc956fab-2268-4862-a43b-57501989f228-logs\") pod \"nova-kuttl-api-0\" (UID: \"fc956fab-2268-4862-a43b-57501989f228\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.297457 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc956fab-2268-4862-a43b-57501989f228-config-data\") pod \"nova-kuttl-api-0\" (UID: \"fc956fab-2268-4862-a43b-57501989f228\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.324890 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x96t9\" (UniqueName: \"kubernetes.io/projected/fc956fab-2268-4862-a43b-57501989f228-kube-api-access-x96t9\") pod \"nova-kuttl-api-0\" (UID: \"fc956fab-2268-4862-a43b-57501989f228\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.395521 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/354efd80-1bfe-4969-80e5-6ba275d34697-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"354efd80-1bfe-4969-80e5-6ba275d34697\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.395580 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/354efd80-1bfe-4969-80e5-6ba275d34697-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"354efd80-1bfe-4969-80e5-6ba275d34697\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.395628 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-297qf\" (UniqueName: \"kubernetes.io/projected/354efd80-1bfe-4969-80e5-6ba275d34697-kube-api-access-297qf\") pod \"nova-kuttl-metadata-0\" (UID: \"354efd80-1bfe-4969-80e5-6ba275d34697\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.395648 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60634ae6-20de-4c41-b4bf-0fceda1df7e5-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"60634ae6-20de-4c41-b4bf-0fceda1df7e5\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.395688 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp587\" (UniqueName: \"kubernetes.io/projected/60634ae6-20de-4c41-b4bf-0fceda1df7e5-kube-api-access-pp587\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"60634ae6-20de-4c41-b4bf-0fceda1df7e5\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.396099 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/354efd80-1bfe-4969-80e5-6ba275d34697-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"354efd80-1bfe-4969-80e5-6ba275d34697\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.401929 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60634ae6-20de-4c41-b4bf-0fceda1df7e5-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"60634ae6-20de-4c41-b4bf-0fceda1df7e5\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.402244 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/354efd80-1bfe-4969-80e5-6ba275d34697-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"354efd80-1bfe-4969-80e5-6ba275d34697\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.414690 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-297qf\" (UniqueName: \"kubernetes.io/projected/354efd80-1bfe-4969-80e5-6ba275d34697-kube-api-access-297qf\") pod \"nova-kuttl-metadata-0\" (UID: \"354efd80-1bfe-4969-80e5-6ba275d34697\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.419086 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp587\" (UniqueName: \"kubernetes.io/projected/60634ae6-20de-4c41-b4bf-0fceda1df7e5-kube-api-access-pp587\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"60634ae6-20de-4c41-b4bf-0fceda1df7e5\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.503818 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.514711 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.533737 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.725344 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ade3732b-4731-4318-a3ef-7c97825a71ed" path="/var/lib/kubelet/pods/ade3732b-4731-4318-a3ef-7c97825a71ed/volumes"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.726259 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2a774c2-1605-4329-bd98-fba72cd66171" path="/var/lib/kubelet/pods/d2a774c2-1605-4329-bd98-fba72cd66171/volumes"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.868060 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.903354 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbqdv\" (UniqueName: \"kubernetes.io/projected/cd12f6cf-eef0-4d55-8500-2d64ed9e7648-kube-api-access-bbqdv\") pod \"cd12f6cf-eef0-4d55-8500-2d64ed9e7648\" (UID: \"cd12f6cf-eef0-4d55-8500-2d64ed9e7648\") "
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.903556 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd12f6cf-eef0-4d55-8500-2d64ed9e7648-config-data\") pod \"cd12f6cf-eef0-4d55-8500-2d64ed9e7648\" (UID: \"cd12f6cf-eef0-4d55-8500-2d64ed9e7648\") "
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.908280 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd12f6cf-eef0-4d55-8500-2d64ed9e7648-kube-api-access-bbqdv" (OuterVolumeSpecName: "kube-api-access-bbqdv") pod "cd12f6cf-eef0-4d55-8500-2d64ed9e7648" (UID: "cd12f6cf-eef0-4d55-8500-2d64ed9e7648"). InnerVolumeSpecName "kube-api-access-bbqdv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.937122 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd12f6cf-eef0-4d55-8500-2d64ed9e7648-config-data" (OuterVolumeSpecName: "config-data") pod "cd12f6cf-eef0-4d55-8500-2d64ed9e7648" (UID: "cd12f6cf-eef0-4d55-8500-2d64ed9e7648"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.006254 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd12f6cf-eef0-4d55-8500-2d64ed9e7648-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.006296 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbqdv\" (UniqueName: \"kubernetes.io/projected/cd12f6cf-eef0-4d55-8500-2d64ed9e7648-kube-api-access-bbqdv\") on node \"crc\" DevicePath \"\""
Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.018363 4775 generic.go:334] "Generic (PLEG): container finished" podID="cd12f6cf-eef0-4d55-8500-2d64ed9e7648" containerID="fbe0abac4e6cee8d6565dd2b6582cfcf62e3451343c85ba596566cd55c678a15" exitCode=0
Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.018427 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.018463 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"cd12f6cf-eef0-4d55-8500-2d64ed9e7648","Type":"ContainerDied","Data":"fbe0abac4e6cee8d6565dd2b6582cfcf62e3451343c85ba596566cd55c678a15"}
Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.018519 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"cd12f6cf-eef0-4d55-8500-2d64ed9e7648","Type":"ContainerDied","Data":"c74b107de095453d19a75391e5aae3a435d1e6489cec783e11c3fb51cedba1a5"}
Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.018640 4775 scope.go:117] "RemoveContainer" containerID="fbe0abac4e6cee8d6565dd2b6582cfcf62e3451343c85ba596566cd55c678a15"
Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.045730 4775 scope.go:117] "RemoveContainer" containerID="fbe0abac4e6cee8d6565dd2b6582cfcf62e3451343c85ba596566cd55c678a15"
Jan 23 14:27:12 crc kubenswrapper[4775]: E0123 14:27:12.046395 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbe0abac4e6cee8d6565dd2b6582cfcf62e3451343c85ba596566cd55c678a15\": container with ID starting with fbe0abac4e6cee8d6565dd2b6582cfcf62e3451343c85ba596566cd55c678a15 not found: ID does not exist" containerID="fbe0abac4e6cee8d6565dd2b6582cfcf62e3451343c85ba596566cd55c678a15"
Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.046456 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbe0abac4e6cee8d6565dd2b6582cfcf62e3451343c85ba596566cd55c678a15"} err="failed to get container status \"fbe0abac4e6cee8d6565dd2b6582cfcf62e3451343c85ba596566cd55c678a15\": rpc error: code = NotFound desc = could not find container \"fbe0abac4e6cee8d6565dd2b6582cfcf62e3451343c85ba596566cd55c678a15\": container with ID starting with fbe0abac4e6cee8d6565dd2b6582cfcf62e3451343c85ba596566cd55c678a15 not found: ID does not exist"
Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.052585 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.065924 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.074173 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:27:12 crc kubenswrapper[4775]: E0123 14:27:12.074567 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd12f6cf-eef0-4d55-8500-2d64ed9e7648" containerName="nova-kuttl-scheduler-scheduler"
Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.074587 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd12f6cf-eef0-4d55-8500-2d64ed9e7648" containerName="nova-kuttl-scheduler-scheduler"
Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.074846 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd12f6cf-eef0-4d55-8500-2d64ed9e7648" containerName="nova-kuttl-scheduler-scheduler"
Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.075466 4775 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.079898 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.090447 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.108116 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87285e2b-3522-41c7-800d-1ae2d92cfb18-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"87285e2b-3522-41c7-800d-1ae2d92cfb18\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.108190 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2xkr\" (UniqueName: \"kubernetes.io/projected/87285e2b-3522-41c7-800d-1ae2d92cfb18-kube-api-access-q2xkr\") pod \"nova-kuttl-scheduler-0\" (UID: \"87285e2b-3522-41c7-800d-1ae2d92cfb18\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.117873 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:27:12 crc kubenswrapper[4775]: W0123 14:27:12.121396 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod354efd80_1bfe_4969_80e5_6ba275d34697.slice/crio-79b671456a4bd17f70f71ef9ffb5fde2cc5e2c39c0ec88732e5d9b28bcd5758e WatchSource:0}: Error finding container 79b671456a4bd17f70f71ef9ffb5fde2cc5e2c39c0ec88732e5d9b28bcd5758e: Status 404 returned error can't find the container with id 79b671456a4bd17f70f71ef9ffb5fde2cc5e2c39c0ec88732e5d9b28bcd5758e Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.132558 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.148015 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.212079 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2xkr\" (UniqueName: \"kubernetes.io/projected/87285e2b-3522-41c7-800d-1ae2d92cfb18-kube-api-access-q2xkr\") pod \"nova-kuttl-scheduler-0\" (UID: \"87285e2b-3522-41c7-800d-1ae2d92cfb18\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.212294 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87285e2b-3522-41c7-800d-1ae2d92cfb18-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"87285e2b-3522-41c7-800d-1ae2d92cfb18\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.218130 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87285e2b-3522-41c7-800d-1ae2d92cfb18-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"87285e2b-3522-41c7-800d-1ae2d92cfb18\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.228046 4775 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-q2xkr\" (UniqueName: \"kubernetes.io/projected/87285e2b-3522-41c7-800d-1ae2d92cfb18-kube-api-access-q2xkr\") pod \"nova-kuttl-scheduler-0\" (UID: \"87285e2b-3522-41c7-800d-1ae2d92cfb18\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.390270 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.817511 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:27:12 crc kubenswrapper[4775]: W0123 14:27:12.817558 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87285e2b_3522_41c7_800d_1ae2d92cfb18.slice/crio-18cfa5baa8c39bacf83cd78d8fc431fe87c3aa67fd85875c08ab51ca5adc3b38 WatchSource:0}: Error finding container 18cfa5baa8c39bacf83cd78d8fc431fe87c3aa67fd85875c08ab51ca5adc3b38: Status 404 returned error can't find the container with id 18cfa5baa8c39bacf83cd78d8fc431fe87c3aa67fd85875c08ab51ca5adc3b38 Jan 23 14:27:13 crc kubenswrapper[4775]: I0123 14:27:13.034861 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"87285e2b-3522-41c7-800d-1ae2d92cfb18","Type":"ContainerStarted","Data":"67681e6112c11f53a2adc89b791004105371ee1f5459b827d6fb6e8173a6d561"} Jan 23 14:27:13 crc kubenswrapper[4775]: I0123 14:27:13.034906 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"87285e2b-3522-41c7-800d-1ae2d92cfb18","Type":"ContainerStarted","Data":"18cfa5baa8c39bacf83cd78d8fc431fe87c3aa67fd85875c08ab51ca5adc3b38"} Jan 23 14:27:13 crc kubenswrapper[4775]: I0123 14:27:13.036591 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"fc956fab-2268-4862-a43b-57501989f228","Type":"ContainerStarted","Data":"7ce102529b67e2d758cd642e6da6b4e6c8993a84cc80ca9ef54bbd72a6a57442"} Jan 23 14:27:13 crc kubenswrapper[4775]: I0123 14:27:13.036627 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"fc956fab-2268-4862-a43b-57501989f228","Type":"ContainerStarted","Data":"2606d56bf65ed7f3f2560a6c79a53c90ea6b5d02cf22d9083935f398801d9cde"} Jan 23 14:27:13 crc kubenswrapper[4775]: I0123 14:27:13.036644 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"fc956fab-2268-4862-a43b-57501989f228","Type":"ContainerStarted","Data":"56bd4495cf68ab5a756bf3af1e37ce78f529d5988a8c39d8d01e594b1f0ddb64"} Jan 23 14:27:13 crc kubenswrapper[4775]: I0123 14:27:13.038347 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"354efd80-1bfe-4969-80e5-6ba275d34697","Type":"ContainerStarted","Data":"e3e4205ae38d8b58d207903c4fa7cc9fc52aa806d9ca4a29ad913ecc3f6de1e1"} Jan 23 14:27:13 crc kubenswrapper[4775]: I0123 14:27:13.038382 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"354efd80-1bfe-4969-80e5-6ba275d34697","Type":"ContainerStarted","Data":"afca09300cfdb5918b2f0d30ea54502b9c93f1fc939524b37b0e74c3c92030c0"} Jan 23 14:27:13 crc kubenswrapper[4775]: I0123 14:27:13.038392 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" 
event={"ID":"354efd80-1bfe-4969-80e5-6ba275d34697","Type":"ContainerStarted","Data":"79b671456a4bd17f70f71ef9ffb5fde2cc5e2c39c0ec88732e5d9b28bcd5758e"} Jan 23 14:27:13 crc kubenswrapper[4775]: I0123 14:27:13.040529 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"60634ae6-20de-4c41-b4bf-0fceda1df7e5","Type":"ContainerStarted","Data":"9417ed01719b61c92b4fcb5028120a0468f7bac0cd704d312ce33d3022cbce9e"} Jan 23 14:27:13 crc kubenswrapper[4775]: I0123 14:27:13.040692 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"60634ae6-20de-4c41-b4bf-0fceda1df7e5","Type":"ContainerStarted","Data":"d8f1f0f6e7f62499789debda728a77acf84ec6f7e20d7816daa6f9e8b8134f7b"} Jan 23 14:27:13 crc kubenswrapper[4775]: I0123 14:27:13.040857 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:27:13 crc kubenswrapper[4775]: I0123 14:27:13.055366 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=1.055349065 podStartE2EDuration="1.055349065s" podCreationTimestamp="2026-01-23 14:27:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:27:13.051334349 +0000 UTC m=+1380.046163089" watchObservedRunningTime="2026-01-23 14:27:13.055349065 +0000 UTC m=+1380.050177795" Jan 23 14:27:13 crc kubenswrapper[4775]: I0123 14:27:13.079156 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.079139302 podStartE2EDuration="2.079139302s" podCreationTimestamp="2026-01-23 14:27:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:27:13.07214694 +0000 UTC m=+1380.066975690" watchObservedRunningTime="2026-01-23 14:27:13.079139302 +0000 UTC m=+1380.073968042" Jan 23 14:27:13 crc kubenswrapper[4775]: I0123 14:27:13.096589 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.096570786 podStartE2EDuration="2.096570786s" podCreationTimestamp="2026-01-23 14:27:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:27:13.09568651 +0000 UTC m=+1380.090515260" watchObservedRunningTime="2026-01-23 14:27:13.096570786 +0000 UTC m=+1380.091399536" Jan 23 14:27:13 crc kubenswrapper[4775]: I0123 14:27:13.112912 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podStartSLOduration=2.112889698 podStartE2EDuration="2.112889698s" podCreationTimestamp="2026-01-23 14:27:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:27:13.110415916 +0000 UTC m=+1380.105244676" watchObservedRunningTime="2026-01-23 14:27:13.112889698 +0000 UTC m=+1380.107718468" Jan 23 14:27:13 crc kubenswrapper[4775]: I0123 14:27:13.727015 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd12f6cf-eef0-4d55-8500-2d64ed9e7648" path="/var/lib/kubelet/pods/cd12f6cf-eef0-4d55-8500-2d64ed9e7648/volumes" Jan 23 14:27:16 crc 
kubenswrapper[4775]: I0123 14:27:16.533931 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:16 crc kubenswrapper[4775]: I0123 14:27:16.534324 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:17 crc kubenswrapper[4775]: I0123 14:27:17.390776 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:21 crc kubenswrapper[4775]: I0123 14:27:21.504621 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:21 crc kubenswrapper[4775]: I0123 14:27:21.505225 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:21 crc kubenswrapper[4775]: I0123 14:27:21.534658 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:21 crc kubenswrapper[4775]: I0123 14:27:21.534753 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:21 crc kubenswrapper[4775]: I0123 14:27:21.567750 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.164768 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl"] Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.165856 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.168798 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-config-data" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.168998 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-scripts" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.204967 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl"] Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.285298 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-config-data\") pod \"nova-kuttl-cell1-cell-mapping-rwhvl\" (UID: \"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.285415 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-scripts\") pod \"nova-kuttl-cell1-cell-mapping-rwhvl\" (UID: \"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.285449 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psfd8\" (UniqueName: \"kubernetes.io/projected/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-kube-api-access-psfd8\") pod \"nova-kuttl-cell1-cell-mapping-rwhvl\" (UID: 
\"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.387486 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-scripts\") pod \"nova-kuttl-cell1-cell-mapping-rwhvl\" (UID: \"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.387592 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psfd8\" (UniqueName: \"kubernetes.io/projected/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-kube-api-access-psfd8\") pod \"nova-kuttl-cell1-cell-mapping-rwhvl\" (UID: \"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.387734 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-config-data\") pod \"nova-kuttl-cell1-cell-mapping-rwhvl\" (UID: \"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.391082 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.395787 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-scripts\") pod \"nova-kuttl-cell1-cell-mapping-rwhvl\" (UID: \"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.401378 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-config-data\") pod \"nova-kuttl-cell1-cell-mapping-rwhvl\" (UID: \"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.428338 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psfd8\" (UniqueName: \"kubernetes.io/projected/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-kube-api-access-psfd8\") pod \"nova-kuttl-cell1-cell-mapping-rwhvl\" (UID: \"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.441492 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.493214 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.583760 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.675575 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="fc956fab-2268-4862-a43b-57501989f228" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.131:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.676658 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="354efd80-1bfe-4969-80e5-6ba275d34697" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.133:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.676727 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="fc956fab-2268-4862-a43b-57501989f228" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.131:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.676789 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="354efd80-1bfe-4969-80e5-6ba275d34697" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.133:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:27:23 crc kubenswrapper[4775]: I0123 14:27:23.015724 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl"] Jan 23 14:27:23 crc kubenswrapper[4775]: I0123 14:27:23.558520 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" event={"ID":"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084","Type":"ContainerStarted","Data":"711f68f5e6e9927f1844635ae91ffaae80eaf390a5a10c418f40e975d1662c3b"} Jan 23 14:27:23 crc kubenswrapper[4775]: I0123 14:27:23.558945 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" event={"ID":"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084","Type":"ContainerStarted","Data":"c765600145c9d483f1c3d5fdeaac06e44af30c8da8108e113ebfc8ab5678c66c"} Jan 23 14:27:27 crc kubenswrapper[4775]: I0123 14:27:27.601975 4775 generic.go:334] "Generic (PLEG): container finished" podID="5e6ea152-3ef9-4ed3-85c8-b6798fa8d084" containerID="711f68f5e6e9927f1844635ae91ffaae80eaf390a5a10c418f40e975d1662c3b" exitCode=0 Jan 23 14:27:27 crc kubenswrapper[4775]: I0123 14:27:27.602090 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" event={"ID":"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084","Type":"ContainerDied","Data":"711f68f5e6e9927f1844635ae91ffaae80eaf390a5a10c418f40e975d1662c3b"} Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.057073 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.128711 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psfd8\" (UniqueName: \"kubernetes.io/projected/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-kube-api-access-psfd8\") pod \"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084\" (UID: \"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084\") " Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.128753 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-scripts\") pod \"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084\" (UID: \"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084\") " Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.128826 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-config-data\") pod \"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084\" (UID: \"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084\") " Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.134408 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-kube-api-access-psfd8" (OuterVolumeSpecName: "kube-api-access-psfd8") pod "5e6ea152-3ef9-4ed3-85c8-b6798fa8d084" (UID: "5e6ea152-3ef9-4ed3-85c8-b6798fa8d084"). InnerVolumeSpecName "kube-api-access-psfd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.135908 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-scripts" (OuterVolumeSpecName: "scripts") pod "5e6ea152-3ef9-4ed3-85c8-b6798fa8d084" (UID: "5e6ea152-3ef9-4ed3-85c8-b6798fa8d084"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.169463 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-config-data" (OuterVolumeSpecName: "config-data") pod "5e6ea152-3ef9-4ed3-85c8-b6798fa8d084" (UID: "5e6ea152-3ef9-4ed3-85c8-b6798fa8d084"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.230452 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psfd8\" (UniqueName: \"kubernetes.io/projected/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-kube-api-access-psfd8\") on node \"crc\" DevicePath \"\"" Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.230485 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.230494 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.627258 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" event={"ID":"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084","Type":"ContainerDied","Data":"c765600145c9d483f1c3d5fdeaac06e44af30c8da8108e113ebfc8ab5678c66c"} Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.627628 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c765600145c9d483f1c3d5fdeaac06e44af30c8da8108e113ebfc8ab5678c66c" Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.627771 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.864200 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.864544 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="fc956fab-2268-4862-a43b-57501989f228" containerName="nova-kuttl-api-log" containerID="cri-o://2606d56bf65ed7f3f2560a6c79a53c90ea6b5d02cf22d9083935f398801d9cde" gracePeriod=30 Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.864743 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="fc956fab-2268-4862-a43b-57501989f228" containerName="nova-kuttl-api-api" containerID="cri-o://7ce102529b67e2d758cd642e6da6b4e6c8993a84cc80ca9ef54bbd72a6a57442" gracePeriod=30 Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.935359 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.935693 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="87285e2b-3522-41c7-800d-1ae2d92cfb18" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://67681e6112c11f53a2adc89b791004105371ee1f5459b827d6fb6e8173a6d561" gracePeriod=30 Jan 23 14:27:30 crc kubenswrapper[4775]: I0123 14:27:30.028451 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:27:30 crc kubenswrapper[4775]: I0123 14:27:30.028778 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="354efd80-1bfe-4969-80e5-6ba275d34697" containerName="nova-kuttl-metadata-log" 
containerID="cri-o://afca09300cfdb5918b2f0d30ea54502b9c93f1fc939524b37b0e74c3c92030c0" gracePeriod=30 Jan 23 14:27:30 crc kubenswrapper[4775]: I0123 14:27:30.028917 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="354efd80-1bfe-4969-80e5-6ba275d34697" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://e3e4205ae38d8b58d207903c4fa7cc9fc52aa806d9ca4a29ad913ecc3f6de1e1" gracePeriod=30 Jan 23 14:27:30 crc kubenswrapper[4775]: I0123 14:27:30.639246 4775 generic.go:334] "Generic (PLEG): container finished" podID="fc956fab-2268-4862-a43b-57501989f228" containerID="2606d56bf65ed7f3f2560a6c79a53c90ea6b5d02cf22d9083935f398801d9cde" exitCode=143 Jan 23 14:27:30 crc kubenswrapper[4775]: I0123 14:27:30.639344 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"fc956fab-2268-4862-a43b-57501989f228","Type":"ContainerDied","Data":"2606d56bf65ed7f3f2560a6c79a53c90ea6b5d02cf22d9083935f398801d9cde"} Jan 23 14:27:30 crc kubenswrapper[4775]: I0123 14:27:30.641907 4775 generic.go:334] "Generic (PLEG): container finished" podID="354efd80-1bfe-4969-80e5-6ba275d34697" containerID="afca09300cfdb5918b2f0d30ea54502b9c93f1fc939524b37b0e74c3c92030c0" exitCode=143 Jan 23 14:27:30 crc kubenswrapper[4775]: I0123 14:27:30.641954 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"354efd80-1bfe-4969-80e5-6ba275d34697","Type":"ContainerDied","Data":"afca09300cfdb5918b2f0d30ea54502b9c93f1fc939524b37b0e74c3c92030c0"} Jan 23 14:27:32 crc kubenswrapper[4775]: E0123 14:27:32.393459 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="67681e6112c11f53a2adc89b791004105371ee1f5459b827d6fb6e8173a6d561" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 14:27:32 crc kubenswrapper[4775]: E0123 14:27:32.396473 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="67681e6112c11f53a2adc89b791004105371ee1f5459b827d6fb6e8173a6d561" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 14:27:32 crc kubenswrapper[4775]: E0123 14:27:32.398780 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="67681e6112c11f53a2adc89b791004105371ee1f5459b827d6fb6e8173a6d561" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 14:27:32 crc kubenswrapper[4775]: E0123 14:27:32.398887 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="87285e2b-3522-41c7-800d-1ae2d92cfb18" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.474120 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.592742 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.609771 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc956fab-2268-4862-a43b-57501989f228-logs\") pod \"fc956fab-2268-4862-a43b-57501989f228\" (UID: \"fc956fab-2268-4862-a43b-57501989f228\") " Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.609848 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x96t9\" (UniqueName: \"kubernetes.io/projected/fc956fab-2268-4862-a43b-57501989f228-kube-api-access-x96t9\") pod \"fc956fab-2268-4862-a43b-57501989f228\" (UID: \"fc956fab-2268-4862-a43b-57501989f228\") " Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.609999 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc956fab-2268-4862-a43b-57501989f228-config-data\") pod \"fc956fab-2268-4862-a43b-57501989f228\" (UID: \"fc956fab-2268-4862-a43b-57501989f228\") " Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.610362 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc956fab-2268-4862-a43b-57501989f228-logs" (OuterVolumeSpecName: "logs") pod "fc956fab-2268-4862-a43b-57501989f228" (UID: "fc956fab-2268-4862-a43b-57501989f228"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.611844 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc956fab-2268-4862-a43b-57501989f228-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.620006 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc956fab-2268-4862-a43b-57501989f228-kube-api-access-x96t9" (OuterVolumeSpecName: "kube-api-access-x96t9") pod "fc956fab-2268-4862-a43b-57501989f228" (UID: "fc956fab-2268-4862-a43b-57501989f228"). InnerVolumeSpecName "kube-api-access-x96t9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.635209 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc956fab-2268-4862-a43b-57501989f228-config-data" (OuterVolumeSpecName: "config-data") pod "fc956fab-2268-4862-a43b-57501989f228" (UID: "fc956fab-2268-4862-a43b-57501989f228"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.683097 4775 generic.go:334] "Generic (PLEG): container finished" podID="fc956fab-2268-4862-a43b-57501989f228" containerID="7ce102529b67e2d758cd642e6da6b4e6c8993a84cc80ca9ef54bbd72a6a57442" exitCode=0 Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.683195 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"fc956fab-2268-4862-a43b-57501989f228","Type":"ContainerDied","Data":"7ce102529b67e2d758cd642e6da6b4e6c8993a84cc80ca9ef54bbd72a6a57442"} Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.683201 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.683226 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"fc956fab-2268-4862-a43b-57501989f228","Type":"ContainerDied","Data":"56bd4495cf68ab5a756bf3af1e37ce78f529d5988a8c39d8d01e594b1f0ddb64"} Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.683243 4775 scope.go:117] "RemoveContainer" containerID="7ce102529b67e2d758cd642e6da6b4e6c8993a84cc80ca9ef54bbd72a6a57442" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.686362 4775 generic.go:334] "Generic (PLEG): container finished" podID="354efd80-1bfe-4969-80e5-6ba275d34697" containerID="e3e4205ae38d8b58d207903c4fa7cc9fc52aa806d9ca4a29ad913ecc3f6de1e1" exitCode=0 Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.686396 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.686574 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"354efd80-1bfe-4969-80e5-6ba275d34697","Type":"ContainerDied","Data":"e3e4205ae38d8b58d207903c4fa7cc9fc52aa806d9ca4a29ad913ecc3f6de1e1"} Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.686615 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"354efd80-1bfe-4969-80e5-6ba275d34697","Type":"ContainerDied","Data":"79b671456a4bd17f70f71ef9ffb5fde2cc5e2c39c0ec88732e5d9b28bcd5758e"} Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.712727 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/354efd80-1bfe-4969-80e5-6ba275d34697-logs\") pod \"354efd80-1bfe-4969-80e5-6ba275d34697\" (UID: \"354efd80-1bfe-4969-80e5-6ba275d34697\") " Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.713044 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/354efd80-1bfe-4969-80e5-6ba275d34697-config-data\") pod \"354efd80-1bfe-4969-80e5-6ba275d34697\" (UID: \"354efd80-1bfe-4969-80e5-6ba275d34697\") " Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.713209 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-297qf\" (UniqueName: \"kubernetes.io/projected/354efd80-1bfe-4969-80e5-6ba275d34697-kube-api-access-297qf\") pod \"354efd80-1bfe-4969-80e5-6ba275d34697\" (UID: \"354efd80-1bfe-4969-80e5-6ba275d34697\") " Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.713566 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x96t9\" (UniqueName: \"kubernetes.io/projected/fc956fab-2268-4862-a43b-57501989f228-kube-api-access-x96t9\") on node \"crc\" DevicePath \"\"" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.713667 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc956fab-2268-4862-a43b-57501989f228-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.714537 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/354efd80-1bfe-4969-80e5-6ba275d34697-logs" (OuterVolumeSpecName: "logs") pod "354efd80-1bfe-4969-80e5-6ba275d34697" (UID: "354efd80-1bfe-4969-80e5-6ba275d34697"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.718875 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/354efd80-1bfe-4969-80e5-6ba275d34697-kube-api-access-297qf" (OuterVolumeSpecName: "kube-api-access-297qf") pod "354efd80-1bfe-4969-80e5-6ba275d34697" (UID: "354efd80-1bfe-4969-80e5-6ba275d34697"). InnerVolumeSpecName "kube-api-access-297qf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.737554 4775 scope.go:117] "RemoveContainer" containerID="2606d56bf65ed7f3f2560a6c79a53c90ea6b5d02cf22d9083935f398801d9cde" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.739175 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/354efd80-1bfe-4969-80e5-6ba275d34697-config-data" (OuterVolumeSpecName: "config-data") pod "354efd80-1bfe-4969-80e5-6ba275d34697" (UID: "354efd80-1bfe-4969-80e5-6ba275d34697"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.741216 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.759059 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.759119 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:27:33 crc kubenswrapper[4775]: E0123 14:27:33.759507 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e6ea152-3ef9-4ed3-85c8-b6798fa8d084" containerName="nova-manage" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.759524 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e6ea152-3ef9-4ed3-85c8-b6798fa8d084" containerName="nova-manage" Jan 23 14:27:33 crc kubenswrapper[4775]: E0123 14:27:33.759547 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354efd80-1bfe-4969-80e5-6ba275d34697" containerName="nova-kuttl-metadata-log" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.759558 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="354efd80-1bfe-4969-80e5-6ba275d34697" containerName="nova-kuttl-metadata-log" Jan 23 14:27:33 crc kubenswrapper[4775]: E0123 14:27:33.759570 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc956fab-2268-4862-a43b-57501989f228" containerName="nova-kuttl-api-api" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.759578 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc956fab-2268-4862-a43b-57501989f228" containerName="nova-kuttl-api-api" Jan 23 14:27:33 crc kubenswrapper[4775]: E0123 14:27:33.759588 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354efd80-1bfe-4969-80e5-6ba275d34697" containerName="nova-kuttl-metadata-metadata" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.759598 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="354efd80-1bfe-4969-80e5-6ba275d34697" containerName="nova-kuttl-metadata-metadata" Jan 23 14:27:33 crc kubenswrapper[4775]: E0123 14:27:33.759615 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc956fab-2268-4862-a43b-57501989f228" containerName="nova-kuttl-api-log" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.759623 4775 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="fc956fab-2268-4862-a43b-57501989f228" containerName="nova-kuttl-api-log" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.759795 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc956fab-2268-4862-a43b-57501989f228" containerName="nova-kuttl-api-log" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.759838 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="354efd80-1bfe-4969-80e5-6ba275d34697" containerName="nova-kuttl-metadata-log" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.759863 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="354efd80-1bfe-4969-80e5-6ba275d34697" containerName="nova-kuttl-metadata-metadata" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.759881 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e6ea152-3ef9-4ed3-85c8-b6798fa8d084" containerName="nova-manage" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.759904 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc956fab-2268-4862-a43b-57501989f228" containerName="nova-kuttl-api-api" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.761248 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.763051 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.787449 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.810016 4775 scope.go:117] "RemoveContainer" containerID="7ce102529b67e2d758cd642e6da6b4e6c8993a84cc80ca9ef54bbd72a6a57442" Jan 23 14:27:33 crc kubenswrapper[4775]: E0123 14:27:33.810459 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ce102529b67e2d758cd642e6da6b4e6c8993a84cc80ca9ef54bbd72a6a57442\": container with ID starting with 7ce102529b67e2d758cd642e6da6b4e6c8993a84cc80ca9ef54bbd72a6a57442 not found: ID does not exist" containerID="7ce102529b67e2d758cd642e6da6b4e6c8993a84cc80ca9ef54bbd72a6a57442" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.810496 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ce102529b67e2d758cd642e6da6b4e6c8993a84cc80ca9ef54bbd72a6a57442"} err="failed to get container status \"7ce102529b67e2d758cd642e6da6b4e6c8993a84cc80ca9ef54bbd72a6a57442\": rpc error: code = NotFound desc = could not find container \"7ce102529b67e2d758cd642e6da6b4e6c8993a84cc80ca9ef54bbd72a6a57442\": container with ID starting with 7ce102529b67e2d758cd642e6da6b4e6c8993a84cc80ca9ef54bbd72a6a57442 not found: ID does not exist" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.810522 4775 scope.go:117] "RemoveContainer" containerID="2606d56bf65ed7f3f2560a6c79a53c90ea6b5d02cf22d9083935f398801d9cde" Jan 23 14:27:33 crc kubenswrapper[4775]: E0123 14:27:33.811258 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2606d56bf65ed7f3f2560a6c79a53c90ea6b5d02cf22d9083935f398801d9cde\": container with ID starting with 2606d56bf65ed7f3f2560a6c79a53c90ea6b5d02cf22d9083935f398801d9cde not found: ID does not exist" containerID="2606d56bf65ed7f3f2560a6c79a53c90ea6b5d02cf22d9083935f398801d9cde" Jan 
23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.811294 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2606d56bf65ed7f3f2560a6c79a53c90ea6b5d02cf22d9083935f398801d9cde"} err="failed to get container status \"2606d56bf65ed7f3f2560a6c79a53c90ea6b5d02cf22d9083935f398801d9cde\": rpc error: code = NotFound desc = could not find container \"2606d56bf65ed7f3f2560a6c79a53c90ea6b5d02cf22d9083935f398801d9cde\": container with ID starting with 2606d56bf65ed7f3f2560a6c79a53c90ea6b5d02cf22d9083935f398801d9cde not found: ID does not exist" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.811322 4775 scope.go:117] "RemoveContainer" containerID="e3e4205ae38d8b58d207903c4fa7cc9fc52aa806d9ca4a29ad913ecc3f6de1e1" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.815162 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40c54c9a-246a-4dab-af73-779d4d8539e4-config-data\") pod \"nova-kuttl-api-0\" (UID: \"40c54c9a-246a-4dab-af73-779d4d8539e4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.815335 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40c54c9a-246a-4dab-af73-779d4d8539e4-logs\") pod \"nova-kuttl-api-0\" (UID: \"40c54c9a-246a-4dab-af73-779d4d8539e4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.816166 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgxwb\" (UniqueName: \"kubernetes.io/projected/40c54c9a-246a-4dab-af73-779d4d8539e4-kube-api-access-vgxwb\") pod \"nova-kuttl-api-0\" (UID: \"40c54c9a-246a-4dab-af73-779d4d8539e4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.816438 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-297qf\" (UniqueName: \"kubernetes.io/projected/354efd80-1bfe-4969-80e5-6ba275d34697-kube-api-access-297qf\") on node \"crc\" DevicePath \"\"" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.816504 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/354efd80-1bfe-4969-80e5-6ba275d34697-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.816555 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/354efd80-1bfe-4969-80e5-6ba275d34697-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.832247 4775 scope.go:117] "RemoveContainer" containerID="afca09300cfdb5918b2f0d30ea54502b9c93f1fc939524b37b0e74c3c92030c0" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.854079 4775 scope.go:117] "RemoveContainer" containerID="e3e4205ae38d8b58d207903c4fa7cc9fc52aa806d9ca4a29ad913ecc3f6de1e1" Jan 23 14:27:33 crc kubenswrapper[4775]: E0123 14:27:33.856009 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3e4205ae38d8b58d207903c4fa7cc9fc52aa806d9ca4a29ad913ecc3f6de1e1\": container with ID starting with e3e4205ae38d8b58d207903c4fa7cc9fc52aa806d9ca4a29ad913ecc3f6de1e1 not found: ID does not exist" containerID="e3e4205ae38d8b58d207903c4fa7cc9fc52aa806d9ca4a29ad913ecc3f6de1e1" Jan 23 
14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.856073 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3e4205ae38d8b58d207903c4fa7cc9fc52aa806d9ca4a29ad913ecc3f6de1e1"} err="failed to get container status \"e3e4205ae38d8b58d207903c4fa7cc9fc52aa806d9ca4a29ad913ecc3f6de1e1\": rpc error: code = NotFound desc = could not find container \"e3e4205ae38d8b58d207903c4fa7cc9fc52aa806d9ca4a29ad913ecc3f6de1e1\": container with ID starting with e3e4205ae38d8b58d207903c4fa7cc9fc52aa806d9ca4a29ad913ecc3f6de1e1 not found: ID does not exist" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.856108 4775 scope.go:117] "RemoveContainer" containerID="afca09300cfdb5918b2f0d30ea54502b9c93f1fc939524b37b0e74c3c92030c0" Jan 23 14:27:33 crc kubenswrapper[4775]: E0123 14:27:33.857060 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afca09300cfdb5918b2f0d30ea54502b9c93f1fc939524b37b0e74c3c92030c0\": container with ID starting with afca09300cfdb5918b2f0d30ea54502b9c93f1fc939524b37b0e74c3c92030c0 not found: ID does not exist" containerID="afca09300cfdb5918b2f0d30ea54502b9c93f1fc939524b37b0e74c3c92030c0" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.857106 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afca09300cfdb5918b2f0d30ea54502b9c93f1fc939524b37b0e74c3c92030c0"} err="failed to get container status \"afca09300cfdb5918b2f0d30ea54502b9c93f1fc939524b37b0e74c3c92030c0\": rpc error: code = NotFound desc = could not find container \"afca09300cfdb5918b2f0d30ea54502b9c93f1fc939524b37b0e74c3c92030c0\": container with ID starting with afca09300cfdb5918b2f0d30ea54502b9c93f1fc939524b37b0e74c3c92030c0 not found: ID does not exist" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.917979 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40c54c9a-246a-4dab-af73-779d4d8539e4-logs\") pod \"nova-kuttl-api-0\" (UID: \"40c54c9a-246a-4dab-af73-779d4d8539e4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.918028 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgxwb\" (UniqueName: \"kubernetes.io/projected/40c54c9a-246a-4dab-af73-779d4d8539e4-kube-api-access-vgxwb\") pod \"nova-kuttl-api-0\" (UID: \"40c54c9a-246a-4dab-af73-779d4d8539e4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.918089 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40c54c9a-246a-4dab-af73-779d4d8539e4-config-data\") pod \"nova-kuttl-api-0\" (UID: \"40c54c9a-246a-4dab-af73-779d4d8539e4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.918346 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40c54c9a-246a-4dab-af73-779d4d8539e4-logs\") pod \"nova-kuttl-api-0\" (UID: \"40c54c9a-246a-4dab-af73-779d4d8539e4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.923327 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40c54c9a-246a-4dab-af73-779d4d8539e4-config-data\") pod \"nova-kuttl-api-0\" (UID: 
\"40c54c9a-246a-4dab-af73-779d4d8539e4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.950531 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgxwb\" (UniqueName: \"kubernetes.io/projected/40c54c9a-246a-4dab-af73-779d4d8539e4-kube-api-access-vgxwb\") pod \"nova-kuttl-api-0\" (UID: \"40c54c9a-246a-4dab-af73-779d4d8539e4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.014843 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.024980 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.047558 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.048981 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.053117 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.090666 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.115515 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.121291 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxwzv\" (UniqueName: \"kubernetes.io/projected/1b50fc49-3582-416c-9b89-0de07e733931-kube-api-access-nxwzv\") pod \"nova-kuttl-metadata-0\" (UID: \"1b50fc49-3582-416c-9b89-0de07e733931\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.121361 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b50fc49-3582-416c-9b89-0de07e733931-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"1b50fc49-3582-416c-9b89-0de07e733931\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.121434 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b50fc49-3582-416c-9b89-0de07e733931-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"1b50fc49-3582-416c-9b89-0de07e733931\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.223348 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxwzv\" (UniqueName: \"kubernetes.io/projected/1b50fc49-3582-416c-9b89-0de07e733931-kube-api-access-nxwzv\") pod \"nova-kuttl-metadata-0\" (UID: \"1b50fc49-3582-416c-9b89-0de07e733931\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.223453 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b50fc49-3582-416c-9b89-0de07e733931-config-data\") pod \"nova-kuttl-metadata-0\" (UID: 
\"1b50fc49-3582-416c-9b89-0de07e733931\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.223528 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b50fc49-3582-416c-9b89-0de07e733931-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"1b50fc49-3582-416c-9b89-0de07e733931\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.224171 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b50fc49-3582-416c-9b89-0de07e733931-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"1b50fc49-3582-416c-9b89-0de07e733931\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.229527 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b50fc49-3582-416c-9b89-0de07e733931-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"1b50fc49-3582-416c-9b89-0de07e733931\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.281429 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxwzv\" (UniqueName: \"kubernetes.io/projected/1b50fc49-3582-416c-9b89-0de07e733931-kube-api-access-nxwzv\") pod \"nova-kuttl-metadata-0\" (UID: \"1b50fc49-3582-416c-9b89-0de07e733931\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.481412 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:34 crc kubenswrapper[4775]: W0123 14:27:34.617739 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40c54c9a_246a_4dab_af73_779d4d8539e4.slice/crio-cc067c426dd03351b5a8a8591d3c2c83477c0b5d51ea784970cfb53f7e6d267e WatchSource:0}: Error finding container cc067c426dd03351b5a8a8591d3c2c83477c0b5d51ea784970cfb53f7e6d267e: Status 404 returned error can't find the container with id cc067c426dd03351b5a8a8591d3c2c83477c0b5d51ea784970cfb53f7e6d267e Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.618708 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.715614 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"40c54c9a-246a-4dab-af73-779d4d8539e4","Type":"ContainerStarted","Data":"cc067c426dd03351b5a8a8591d3c2c83477c0b5d51ea784970cfb53f7e6d267e"} Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.719552 4775 generic.go:334] "Generic (PLEG): container finished" podID="87285e2b-3522-41c7-800d-1ae2d92cfb18" containerID="67681e6112c11f53a2adc89b791004105371ee1f5459b827d6fb6e8173a6d561" exitCode=0 Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.719556 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"87285e2b-3522-41c7-800d-1ae2d92cfb18","Type":"ContainerDied","Data":"67681e6112c11f53a2adc89b791004105371ee1f5459b827d6fb6e8173a6d561"} Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.841734 4775 util.go:48] "No ready sandbox for pod can be found. 
Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.947211 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.947756 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87285e2b-3522-41c7-800d-1ae2d92cfb18-config-data\") pod \"87285e2b-3522-41c7-800d-1ae2d92cfb18\" (UID: \"87285e2b-3522-41c7-800d-1ae2d92cfb18\") "
Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.947851 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2xkr\" (UniqueName: \"kubernetes.io/projected/87285e2b-3522-41c7-800d-1ae2d92cfb18-kube-api-access-q2xkr\") pod \"87285e2b-3522-41c7-800d-1ae2d92cfb18\" (UID: \"87285e2b-3522-41c7-800d-1ae2d92cfb18\") "
Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.951057 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87285e2b-3522-41c7-800d-1ae2d92cfb18-kube-api-access-q2xkr" (OuterVolumeSpecName: "kube-api-access-q2xkr") pod "87285e2b-3522-41c7-800d-1ae2d92cfb18" (UID: "87285e2b-3522-41c7-800d-1ae2d92cfb18"). InnerVolumeSpecName "kube-api-access-q2xkr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.968068 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87285e2b-3522-41c7-800d-1ae2d92cfb18-config-data" (OuterVolumeSpecName: "config-data") pod "87285e2b-3522-41c7-800d-1ae2d92cfb18" (UID: "87285e2b-3522-41c7-800d-1ae2d92cfb18"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.049086 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87285e2b-3522-41c7-800d-1ae2d92cfb18-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.049122 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2xkr\" (UniqueName: \"kubernetes.io/projected/87285e2b-3522-41c7-800d-1ae2d92cfb18-kube-api-access-q2xkr\") on node \"crc\" DevicePath \"\""
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.731925 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="354efd80-1bfe-4969-80e5-6ba275d34697" path="/var/lib/kubelet/pods/354efd80-1bfe-4969-80e5-6ba275d34697/volumes"
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.733250 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc956fab-2268-4862-a43b-57501989f228" path="/var/lib/kubelet/pods/fc956fab-2268-4862-a43b-57501989f228/volumes"
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.738207 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"40c54c9a-246a-4dab-af73-779d4d8539e4","Type":"ContainerStarted","Data":"19f64885adeeb673d9cba11e78c8b70596ea5a7795eddab4d7f824f5be3cd3c6"}
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.738287 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"40c54c9a-246a-4dab-af73-779d4d8539e4","Type":"ContainerStarted","Data":"92c8db5180b73a5bbb803a67b1485926a5904ee84f310a02f878949deb43649d"}
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.742210 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"1b50fc49-3582-416c-9b89-0de07e733931","Type":"ContainerStarted","Data":"3b2c4fa8ecf48ebe29b25c30f72c2762525e314644186ec94469e2e547873058"}
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.742271 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"1b50fc49-3582-416c-9b89-0de07e733931","Type":"ContainerStarted","Data":"f3fd1649a2aded52e00c39e1c1d72e905fd324149ec6f8d6ddfb00f2c288864e"}
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.742295 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"1b50fc49-3582-416c-9b89-0de07e733931","Type":"ContainerStarted","Data":"0da722dd90642caf85fa0f11331565aec51183c8f53f1cf43b2602bc06530edf"}
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.744197 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"87285e2b-3522-41c7-800d-1ae2d92cfb18","Type":"ContainerDied","Data":"18cfa5baa8c39bacf83cd78d8fc431fe87c3aa67fd85875c08ab51ca5adc3b38"}
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.744253 4775 scope.go:117] "RemoveContainer" containerID="67681e6112c11f53a2adc89b791004105371ee1f5459b827d6fb6e8173a6d561"
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.744302 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.778214 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.778189111 podStartE2EDuration="2.778189111s" podCreationTimestamp="2026-01-23 14:27:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:27:35.764723941 +0000 UTC m=+1402.759552721" watchObservedRunningTime="2026-01-23 14:27:35.778189111 +0000 UTC m=+1402.773017891"
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.798963 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=1.7989346400000001 podStartE2EDuration="1.79893464s" podCreationTimestamp="2026-01-23 14:27:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:27:35.789211699 +0000 UTC m=+1402.784040489" watchObservedRunningTime="2026-01-23 14:27:35.79893464 +0000 UTC m=+1402.793763420"
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.821307 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.831525 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.840026 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:27:35 crc kubenswrapper[4775]: E0123 14:27:35.840342 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87285e2b-3522-41c7-800d-1ae2d92cfb18" containerName="nova-kuttl-scheduler-scheduler"
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.840359 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="87285e2b-3522-41c7-800d-1ae2d92cfb18" containerName="nova-kuttl-scheduler-scheduler"
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.840507 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="87285e2b-3522-41c7-800d-1ae2d92cfb18" containerName="nova-kuttl-scheduler-scheduler"
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.841018 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
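
For nova-kuttl-api-0 above, podStartSLOduration=2.778189111 is exactly watchObservedRunningTime (14:27:35.778189111) minus podCreationTimestamp (14:27:33); the pull timestamps stay at the zero time because no image pull was needed. A quick stdlib check of that arithmetic (the layout string is an assumption matching how these timestamps print, with the m=+... monotonic suffix dropped):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout for timestamps like "2026-01-23 14:27:35.778189111 +0000 UTC".
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2026-01-23 14:27:33 +0000 UTC")
	running, _ := time.Parse(layout, "2026-01-23 14:27:35.778189111 +0000 UTC")
	fmt.Println(running.Sub(created).Seconds()) // 2.778189111
}
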
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.843548 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data"
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.858569 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.865262 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e96bb87-5923-457f-bf02-51a1182e90bc-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"3e96bb87-5923-457f-bf02-51a1182e90bc\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.865358 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj4np\" (UniqueName: \"kubernetes.io/projected/3e96bb87-5923-457f-bf02-51a1182e90bc-kube-api-access-pj4np\") pod \"nova-kuttl-scheduler-0\" (UID: \"3e96bb87-5923-457f-bf02-51a1182e90bc\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.966743 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj4np\" (UniqueName: \"kubernetes.io/projected/3e96bb87-5923-457f-bf02-51a1182e90bc-kube-api-access-pj4np\") pod \"nova-kuttl-scheduler-0\" (UID: \"3e96bb87-5923-457f-bf02-51a1182e90bc\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.966886 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e96bb87-5923-457f-bf02-51a1182e90bc-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"3e96bb87-5923-457f-bf02-51a1182e90bc\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.981057 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e96bb87-5923-457f-bf02-51a1182e90bc-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"3e96bb87-5923-457f-bf02-51a1182e90bc\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.981073 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj4np\" (UniqueName: \"kubernetes.io/projected/3e96bb87-5923-457f-bf02-51a1182e90bc-kube-api-access-pj4np\") pod \"nova-kuttl-scheduler-0\" (UID: \"3e96bb87-5923-457f-bf02-51a1182e90bc\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:27:36 crc kubenswrapper[4775]: I0123 14:27:36.158935 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:27:36 crc kubenswrapper[4775]: I0123 14:27:36.654058 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:27:36 crc kubenswrapper[4775]: W0123 14:27:36.659354 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e96bb87_5923_457f_bf02_51a1182e90bc.slice/crio-5bbc8cbd22e1e763806e59239a30a31f8865fb7589db1e6ad2f16cc53daa3460 WatchSource:0}: Error finding container 5bbc8cbd22e1e763806e59239a30a31f8865fb7589db1e6ad2f16cc53daa3460: Status 404 returned error can't find the container with id 5bbc8cbd22e1e763806e59239a30a31f8865fb7589db1e6ad2f16cc53daa3460
Jan 23 14:27:36 crc kubenswrapper[4775]: I0123 14:27:36.754684 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"3e96bb87-5923-457f-bf02-51a1182e90bc","Type":"ContainerStarted","Data":"5bbc8cbd22e1e763806e59239a30a31f8865fb7589db1e6ad2f16cc53daa3460"}
Jan 23 14:27:37 crc kubenswrapper[4775]: I0123 14:27:37.725104 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87285e2b-3522-41c7-800d-1ae2d92cfb18" path="/var/lib/kubelet/pods/87285e2b-3522-41c7-800d-1ae2d92cfb18/volumes"
Jan 23 14:27:37 crc kubenswrapper[4775]: I0123 14:27:37.768282 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"3e96bb87-5923-457f-bf02-51a1182e90bc","Type":"ContainerStarted","Data":"ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3"}
Jan 23 14:27:37 crc kubenswrapper[4775]: I0123 14:27:37.801332 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.801310353 podStartE2EDuration="2.801310353s" podCreationTimestamp="2026-01-23 14:27:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:27:37.792265111 +0000 UTC m=+1404.787093851" watchObservedRunningTime="2026-01-23 14:27:37.801310353 +0000 UTC m=+1404.796139113"
Jan 23 14:27:39 crc kubenswrapper[4775]: I0123 14:27:39.482177 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:39 crc kubenswrapper[4775]: I0123 14:27:39.483692 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:41 crc kubenswrapper[4775]: I0123 14:27:41.160113 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:27:42 crc kubenswrapper[4775]: I0123 14:27:42.330416 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gpjzl"]
Jan 23 14:27:42 crc kubenswrapper[4775]: I0123 14:27:42.332596 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:27:42 crc kubenswrapper[4775]: I0123 14:27:42.353130 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gpjzl"]
Jan 23 14:27:42 crc kubenswrapper[4775]: I0123 14:27:42.391173 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjkhv\" (UniqueName: \"kubernetes.io/projected/7fecf032-f999-4138-a4e5-e2673da92749-kube-api-access-hjkhv\") pod \"redhat-operators-gpjzl\" (UID: \"7fecf032-f999-4138-a4e5-e2673da92749\") " pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:27:42 crc kubenswrapper[4775]: I0123 14:27:42.391235 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fecf032-f999-4138-a4e5-e2673da92749-utilities\") pod \"redhat-operators-gpjzl\" (UID: \"7fecf032-f999-4138-a4e5-e2673da92749\") " pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:27:42 crc kubenswrapper[4775]: I0123 14:27:42.391295 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fecf032-f999-4138-a4e5-e2673da92749-catalog-content\") pod \"redhat-operators-gpjzl\" (UID: \"7fecf032-f999-4138-a4e5-e2673da92749\") " pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:27:42 crc kubenswrapper[4775]: I0123 14:27:42.492498 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fecf032-f999-4138-a4e5-e2673da92749-utilities\") pod \"redhat-operators-gpjzl\" (UID: \"7fecf032-f999-4138-a4e5-e2673da92749\") " pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:27:42 crc kubenswrapper[4775]: I0123 14:27:42.492577 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fecf032-f999-4138-a4e5-e2673da92749-catalog-content\") pod \"redhat-operators-gpjzl\" (UID: \"7fecf032-f999-4138-a4e5-e2673da92749\") " pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:27:42 crc kubenswrapper[4775]: I0123 14:27:42.492648 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjkhv\" (UniqueName: \"kubernetes.io/projected/7fecf032-f999-4138-a4e5-e2673da92749-kube-api-access-hjkhv\") pod \"redhat-operators-gpjzl\" (UID: \"7fecf032-f999-4138-a4e5-e2673da92749\") " pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:27:42 crc kubenswrapper[4775]: I0123 14:27:42.493389 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fecf032-f999-4138-a4e5-e2673da92749-utilities\") pod \"redhat-operators-gpjzl\" (UID: \"7fecf032-f999-4138-a4e5-e2673da92749\") " pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:27:42 crc kubenswrapper[4775]: I0123 14:27:42.493487 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fecf032-f999-4138-a4e5-e2673da92749-catalog-content\") pod \"redhat-operators-gpjzl\" (UID: \"7fecf032-f999-4138-a4e5-e2673da92749\") " pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:27:42 crc kubenswrapper[4775]: I0123 14:27:42.522209 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjkhv\" (UniqueName: \"kubernetes.io/projected/7fecf032-f999-4138-a4e5-e2673da92749-kube-api-access-hjkhv\") pod \"redhat-operators-gpjzl\" (UID: \"7fecf032-f999-4138-a4e5-e2673da92749\") " pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:27:42 crc kubenswrapper[4775]: I0123 14:27:42.707383 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:27:43 crc kubenswrapper[4775]: I0123 14:27:43.172576 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gpjzl"]
Jan 23 14:27:43 crc kubenswrapper[4775]: I0123 14:27:43.818450 4775 generic.go:334] "Generic (PLEG): container finished" podID="7fecf032-f999-4138-a4e5-e2673da92749" containerID="50e0bf4586a1ffec8c1f26b17ba6d579e11e79688a064b7a64e866f14bc1d1fd" exitCode=0
Jan 23 14:27:43 crc kubenswrapper[4775]: I0123 14:27:43.818556 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpjzl" event={"ID":"7fecf032-f999-4138-a4e5-e2673da92749","Type":"ContainerDied","Data":"50e0bf4586a1ffec8c1f26b17ba6d579e11e79688a064b7a64e866f14bc1d1fd"}
Jan 23 14:27:43 crc kubenswrapper[4775]: I0123 14:27:43.818765 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpjzl" event={"ID":"7fecf032-f999-4138-a4e5-e2673da92749","Type":"ContainerStarted","Data":"f19856368022300caf4e899bc52e3098520248d22b5c1d6097fb57b313c3d83f"}
Jan 23 14:27:43 crc kubenswrapper[4775]: I0123 14:27:43.820170 4775 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 14:27:44 crc kubenswrapper[4775]: I0123 14:27:44.116360 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:44 crc kubenswrapper[4775]: I0123 14:27:44.116679 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:44 crc kubenswrapper[4775]: I0123 14:27:44.481869 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:44 crc kubenswrapper[4775]: I0123 14:27:44.481950 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:45 crc kubenswrapper[4775]: I0123 14:27:45.200469 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="40c54c9a-246a-4dab-af73-779d4d8539e4" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.136:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:27:45 crc kubenswrapper[4775]: I0123 14:27:45.200430 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="40c54c9a-246a-4dab-af73-779d4d8539e4" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.136:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:27:45 crc kubenswrapper[4775]: I0123 14:27:45.564322 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="1b50fc49-3582-416c-9b89-0de07e733931" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.137:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:27:45 crc kubenswrapper[4775]: I0123 14:27:45.564779 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="1b50fc49-3582-416c-9b89-0de07e733931" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.137:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:27:45 crc kubenswrapper[4775]: I0123 14:27:45.835727 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpjzl" event={"ID":"7fecf032-f999-4138-a4e5-e2673da92749","Type":"ContainerStarted","Data":"f92d2b382016c85e8331d50289c41b5d13ba2d592fc5335d3ef5d073c2570f1e"}
Jan 23 14:27:46 crc kubenswrapper[4775]: I0123 14:27:46.160205 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:27:46 crc kubenswrapper[4775]: I0123 14:27:46.206411 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:27:46 crc kubenswrapper[4775]: I0123 14:27:46.861562 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:27:48 crc kubenswrapper[4775]: I0123 14:27:48.857628 4775 generic.go:334] "Generic (PLEG): container finished" podID="7fecf032-f999-4138-a4e5-e2673da92749" containerID="f92d2b382016c85e8331d50289c41b5d13ba2d592fc5335d3ef5d073c2570f1e" exitCode=0
Jan 23 14:27:48 crc kubenswrapper[4775]: I0123 14:27:48.857969 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpjzl" event={"ID":"7fecf032-f999-4138-a4e5-e2673da92749","Type":"ContainerDied","Data":"f92d2b382016c85e8331d50289c41b5d13ba2d592fc5335d3ef5d073c2570f1e"}
Jan 23 14:27:50 crc kubenswrapper[4775]: I0123 14:27:50.879433 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpjzl" event={"ID":"7fecf032-f999-4138-a4e5-e2673da92749","Type":"ContainerStarted","Data":"9e302d0bf0a17106f01745ef27e10d10f2fc8dbbd317df43d99c71400c94bd8b"}
Jan 23 14:27:50 crc kubenswrapper[4775]: I0123 14:27:50.901336 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gpjzl" podStartSLOduration=2.826238434 podStartE2EDuration="8.901306744s" podCreationTimestamp="2026-01-23 14:27:42 +0000 UTC" firstStartedPulling="2026-01-23 14:27:43.81988104 +0000 UTC m=+1410.814709780" lastFinishedPulling="2026-01-23 14:27:49.89494936 +0000 UTC m=+1416.889778090" observedRunningTime="2026-01-23 14:27:50.899506092 +0000 UTC m=+1417.894334872" watchObservedRunningTime="2026-01-23 14:27:50.901306744 +0000 UTC m=+1417.896135534"
Jan 23 14:27:52 crc kubenswrapper[4775]: I0123 14:27:52.708505 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:27:52 crc kubenswrapper[4775]: I0123 14:27:52.708877 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:27:53 crc kubenswrapper[4775]: I0123 14:27:53.772850 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gpjzl" podUID="7fecf032-f999-4138-a4e5-e2673da92749" containerName="registry-server" probeResult="failure" output=<
Jan 23 14:27:53 crc kubenswrapper[4775]: timeout: failed to connect service ":50051" within 1s
Jan 23 14:27:53 crc kubenswrapper[4775]: >
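
Both failure shapes above are client-side timeouts: the HTTP startup probes abort after their configured timeout ("Client.Timeout exceeded while awaiting headers"), and the registry-server's exec probe gives up connecting to :50051 after 1s. A sketch of the HTTP case with Go's standard library (the URL and 1s timeout mirror the entries above, not any actual kubelet configuration):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Abort if no response headers arrive within the timeout, which yields
	// "context deadline exceeded (Client.Timeout exceeded while awaiting headers)".
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get("http://10.217.0.136:8774/")
	if err != nil {
		fmt.Println("probe failure:", err) // what prober.go logs as output=...
		return
	}
	resp.Body.Close()
	fmt.Println("probe success:", resp.Status)
}
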
Jan 23 14:27:54 crc kubenswrapper[4775]: I0123 14:27:54.121957 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:54 crc kubenswrapper[4775]: I0123 14:27:54.122583 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:54 crc kubenswrapper[4775]: I0123 14:27:54.126571 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:54 crc kubenswrapper[4775]: I0123 14:27:54.127464 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:54 crc kubenswrapper[4775]: I0123 14:27:54.485920 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:54 crc kubenswrapper[4775]: I0123 14:27:54.486493 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:54 crc kubenswrapper[4775]: I0123 14:27:54.489872 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:54 crc kubenswrapper[4775]: I0123 14:27:54.489946 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:54 crc kubenswrapper[4775]: I0123 14:27:54.912976 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:54 crc kubenswrapper[4775]: I0123 14:27:54.919326 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:28:02 crc kubenswrapper[4775]: I0123 14:28:02.783118 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:28:02 crc kubenswrapper[4775]: I0123 14:28:02.857560 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:28:03 crc kubenswrapper[4775]: I0123 14:28:03.894112 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gpjzl"]
Jan 23 14:28:04 crc kubenswrapper[4775]: I0123 14:28:04.017843 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gpjzl" podUID="7fecf032-f999-4138-a4e5-e2673da92749" containerName="registry-server" containerID="cri-o://9e302d0bf0a17106f01745ef27e10d10f2fc8dbbd317df43d99c71400c94bd8b" gracePeriod=2
Jan 23 14:28:04 crc kubenswrapper[4775]: I0123 14:28:04.551710 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:28:04 crc kubenswrapper[4775]: I0123 14:28:04.583589 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fecf032-f999-4138-a4e5-e2673da92749-utilities\") pod \"7fecf032-f999-4138-a4e5-e2673da92749\" (UID: \"7fecf032-f999-4138-a4e5-e2673da92749\") "
Jan 23 14:28:04 crc kubenswrapper[4775]: I0123 14:28:04.583832 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fecf032-f999-4138-a4e5-e2673da92749-catalog-content\") pod \"7fecf032-f999-4138-a4e5-e2673da92749\" (UID: \"7fecf032-f999-4138-a4e5-e2673da92749\") "
Jan 23 14:28:04 crc kubenswrapper[4775]: I0123 14:28:04.583923 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjkhv\" (UniqueName: \"kubernetes.io/projected/7fecf032-f999-4138-a4e5-e2673da92749-kube-api-access-hjkhv\") pod \"7fecf032-f999-4138-a4e5-e2673da92749\" (UID: \"7fecf032-f999-4138-a4e5-e2673da92749\") "
Jan 23 14:28:04 crc kubenswrapper[4775]: I0123 14:28:04.584349 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fecf032-f999-4138-a4e5-e2673da92749-utilities" (OuterVolumeSpecName: "utilities") pod "7fecf032-f999-4138-a4e5-e2673da92749" (UID: "7fecf032-f999-4138-a4e5-e2673da92749"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:28:04 crc kubenswrapper[4775]: I0123 14:28:04.586512 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fecf032-f999-4138-a4e5-e2673da92749-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 14:28:04 crc kubenswrapper[4775]: I0123 14:28:04.591345 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fecf032-f999-4138-a4e5-e2673da92749-kube-api-access-hjkhv" (OuterVolumeSpecName: "kube-api-access-hjkhv") pod "7fecf032-f999-4138-a4e5-e2673da92749" (UID: "7fecf032-f999-4138-a4e5-e2673da92749"). InnerVolumeSpecName "kube-api-access-hjkhv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:28:04 crc kubenswrapper[4775]: I0123 14:28:04.688595 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjkhv\" (UniqueName: \"kubernetes.io/projected/7fecf032-f999-4138-a4e5-e2673da92749-kube-api-access-hjkhv\") on node \"crc\" DevicePath \"\""
Jan 23 14:28:04 crc kubenswrapper[4775]: I0123 14:28:04.696676 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fecf032-f999-4138-a4e5-e2673da92749-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7fecf032-f999-4138-a4e5-e2673da92749" (UID: "7fecf032-f999-4138-a4e5-e2673da92749"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:28:04 crc kubenswrapper[4775]: I0123 14:28:04.789685 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fecf032-f999-4138-a4e5-e2673da92749-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 14:28:05 crc kubenswrapper[4775]: I0123 14:28:05.028763 4775 generic.go:334] "Generic (PLEG): container finished" podID="7fecf032-f999-4138-a4e5-e2673da92749" containerID="9e302d0bf0a17106f01745ef27e10d10f2fc8dbbd317df43d99c71400c94bd8b" exitCode=0
Jan 23 14:28:05 crc kubenswrapper[4775]: I0123 14:28:05.028876 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:28:05 crc kubenswrapper[4775]: I0123 14:28:05.028906 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpjzl" event={"ID":"7fecf032-f999-4138-a4e5-e2673da92749","Type":"ContainerDied","Data":"9e302d0bf0a17106f01745ef27e10d10f2fc8dbbd317df43d99c71400c94bd8b"}
Jan 23 14:28:05 crc kubenswrapper[4775]: I0123 14:28:05.029408 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpjzl" event={"ID":"7fecf032-f999-4138-a4e5-e2673da92749","Type":"ContainerDied","Data":"f19856368022300caf4e899bc52e3098520248d22b5c1d6097fb57b313c3d83f"}
Jan 23 14:28:05 crc kubenswrapper[4775]: I0123 14:28:05.029458 4775 scope.go:117] "RemoveContainer" containerID="9e302d0bf0a17106f01745ef27e10d10f2fc8dbbd317df43d99c71400c94bd8b"
Jan 23 14:28:05 crc kubenswrapper[4775]: I0123 14:28:05.060734 4775 scope.go:117] "RemoveContainer" containerID="f92d2b382016c85e8331d50289c41b5d13ba2d592fc5335d3ef5d073c2570f1e"
Jan 23 14:28:05 crc kubenswrapper[4775]: I0123 14:28:05.080520 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gpjzl"]
Jan 23 14:28:05 crc kubenswrapper[4775]: I0123 14:28:05.086102 4775 scope.go:117] "RemoveContainer" containerID="50e0bf4586a1ffec8c1f26b17ba6d579e11e79688a064b7a64e866f14bc1d1fd"
Jan 23 14:28:05 crc kubenswrapper[4775]: I0123 14:28:05.097544 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gpjzl"]
Jan 23 14:28:05 crc kubenswrapper[4775]: I0123 14:28:05.123043 4775 scope.go:117] "RemoveContainer" containerID="9e302d0bf0a17106f01745ef27e10d10f2fc8dbbd317df43d99c71400c94bd8b"
Jan 23 14:28:05 crc kubenswrapper[4775]: E0123 14:28:05.129081 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e302d0bf0a17106f01745ef27e10d10f2fc8dbbd317df43d99c71400c94bd8b\": container with ID starting with 9e302d0bf0a17106f01745ef27e10d10f2fc8dbbd317df43d99c71400c94bd8b not found: ID does not exist" containerID="9e302d0bf0a17106f01745ef27e10d10f2fc8dbbd317df43d99c71400c94bd8b"
Jan 23 14:28:05 crc kubenswrapper[4775]: I0123 14:28:05.129129 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e302d0bf0a17106f01745ef27e10d10f2fc8dbbd317df43d99c71400c94bd8b"} err="failed to get container status \"9e302d0bf0a17106f01745ef27e10d10f2fc8dbbd317df43d99c71400c94bd8b\": rpc error: code = NotFound desc = could not find container \"9e302d0bf0a17106f01745ef27e10d10f2fc8dbbd317df43d99c71400c94bd8b\": container with ID starting with 9e302d0bf0a17106f01745ef27e10d10f2fc8dbbd317df43d99c71400c94bd8b not found: ID does not exist"
Jan 23 14:28:05 crc kubenswrapper[4775]: I0123 14:28:05.129161 4775 scope.go:117] "RemoveContainer" containerID="f92d2b382016c85e8331d50289c41b5d13ba2d592fc5335d3ef5d073c2570f1e"
Jan 23 14:28:05 crc kubenswrapper[4775]: E0123 14:28:05.129684 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f92d2b382016c85e8331d50289c41b5d13ba2d592fc5335d3ef5d073c2570f1e\": container with ID starting with f92d2b382016c85e8331d50289c41b5d13ba2d592fc5335d3ef5d073c2570f1e not found: ID does not exist" containerID="f92d2b382016c85e8331d50289c41b5d13ba2d592fc5335d3ef5d073c2570f1e"
Jan 23 14:28:05 crc kubenswrapper[4775]: I0123 14:28:05.129740 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f92d2b382016c85e8331d50289c41b5d13ba2d592fc5335d3ef5d073c2570f1e"} err="failed to get container status \"f92d2b382016c85e8331d50289c41b5d13ba2d592fc5335d3ef5d073c2570f1e\": rpc error: code = NotFound desc = could not find container \"f92d2b382016c85e8331d50289c41b5d13ba2d592fc5335d3ef5d073c2570f1e\": container with ID starting with f92d2b382016c85e8331d50289c41b5d13ba2d592fc5335d3ef5d073c2570f1e not found: ID does not exist"
Jan 23 14:28:05 crc kubenswrapper[4775]: I0123 14:28:05.129771 4775 scope.go:117] "RemoveContainer" containerID="50e0bf4586a1ffec8c1f26b17ba6d579e11e79688a064b7a64e866f14bc1d1fd"
Jan 23 14:28:05 crc kubenswrapper[4775]: E0123 14:28:05.130106 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50e0bf4586a1ffec8c1f26b17ba6d579e11e79688a064b7a64e866f14bc1d1fd\": container with ID starting with 50e0bf4586a1ffec8c1f26b17ba6d579e11e79688a064b7a64e866f14bc1d1fd not found: ID does not exist" containerID="50e0bf4586a1ffec8c1f26b17ba6d579e11e79688a064b7a64e866f14bc1d1fd"
Jan 23 14:28:05 crc kubenswrapper[4775]: I0123 14:28:05.130143 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50e0bf4586a1ffec8c1f26b17ba6d579e11e79688a064b7a64e866f14bc1d1fd"} err="failed to get container status \"50e0bf4586a1ffec8c1f26b17ba6d579e11e79688a064b7a64e866f14bc1d1fd\": rpc error: code = NotFound desc = could not find container \"50e0bf4586a1ffec8c1f26b17ba6d579e11e79688a064b7a64e866f14bc1d1fd\": container with ID starting with 50e0bf4586a1ffec8c1f26b17ba6d579e11e79688a064b7a64e866f14bc1d1fd not found: ID does not exist"
Jan 23 14:28:05 crc kubenswrapper[4775]: I0123 14:28:05.734517 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fecf032-f999-4138-a4e5-e2673da92749" path="/var/lib/kubelet/pods/7fecf032-f999-4138-a4e5-e2673da92749/volumes"
Jan 23 14:28:23 crc kubenswrapper[4775]: I0123 14:28:23.219329 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:28:23 crc kubenswrapper[4775]: I0123 14:28:23.219830 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:28:53 crc kubenswrapper[4775]: I0123 14:28:53.219664 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:28:53 crc kubenswrapper[4775]: I0123 14:28:53.220613 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:29:23 crc kubenswrapper[4775]: I0123 14:29:23.218714 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:29:23 crc kubenswrapper[4775]: I0123 14:29:23.219603 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:29:23 crc kubenswrapper[4775]: I0123 14:29:23.219719 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg"
Jan 23 14:29:23 crc kubenswrapper[4775]: I0123 14:29:23.220862 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342"} pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 14:29:23 crc kubenswrapper[4775]: I0123 14:29:23.220956 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" containerID="cri-o://69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" gracePeriod=600
Jan 23 14:29:23 crc kubenswrapper[4775]: E0123 14:29:23.353291 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271"
Jan 23 14:29:23 crc kubenswrapper[4775]: I0123 14:29:23.833741 4775 generic.go:334] "Generic (PLEG): container finished" podID="4fea0767-0566-4214-855d-ed0373946271" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" exitCode=0
Jan 23 14:29:23 crc kubenswrapper[4775]: I0123 14:29:23.833861 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerDied","Data":"69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342"}
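
The "back-off 5m0s" in the CrashLoopBackOff errors above is the kubelet's restart back-off at its cap: the delay starts at 10s by default and doubles on each crash until it saturates at 5m, which is why a persistently failing container like machine-config-daemon sits at exactly 5m0s. A sketch of that schedule (the 10s base and 5m cap are the kubelet's documented defaults, not read from this node's config):

package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		base     = 10 * time.Second // kubelet default initial delay
		maxDelay = 5 * time.Minute  // the "back-off 5m0s" in the log
	)
	delay := base
	for restart := 1; restart <= 8; restart++ {
		fmt.Printf("restart %d: wait %v\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // saturates after ~6 crashes
		}
	}
}
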
I0123 14:29:23.833957 4775 scope.go:117] "RemoveContainer" containerID="a5634c941e351401aed478dd8e700e6d7b7de6241fab2a08ba60719db5eab596" Jan 23 14:29:23 crc kubenswrapper[4775]: I0123 14:29:23.835289 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:29:23 crc kubenswrapper[4775]: E0123 14:29:23.836135 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:29:38 crc kubenswrapper[4775]: I0123 14:29:38.713987 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:29:38 crc kubenswrapper[4775]: E0123 14:29:38.715065 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:29:44 crc kubenswrapper[4775]: I0123 14:29:44.996109 4775 scope.go:117] "RemoveContainer" containerID="a13f8eef0e3c756f922ffa047c8687839a95c0c6de399f124374a283f7dcaa06" Jan 23 14:29:45 crc kubenswrapper[4775]: I0123 14:29:45.037404 4775 scope.go:117] "RemoveContainer" containerID="e4d3d7427f456db9c410656944ad8601abb63e17de245cf5ef8fa44d9943c71d" Jan 23 14:29:51 crc kubenswrapper[4775]: I0123 14:29:51.713792 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:29:51 crc kubenswrapper[4775]: E0123 14:29:51.714532 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.169078 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb"] Jan 23 14:30:00 crc kubenswrapper[4775]: E0123 14:30:00.170283 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fecf032-f999-4138-a4e5-e2673da92749" containerName="extract-content" Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.170306 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fecf032-f999-4138-a4e5-e2673da92749" containerName="extract-content" Jan 23 14:30:00 crc kubenswrapper[4775]: E0123 14:30:00.170329 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fecf032-f999-4138-a4e5-e2673da92749" containerName="registry-server" Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.170344 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fecf032-f999-4138-a4e5-e2673da92749" containerName="registry-server" Jan 23 14:30:00 crc kubenswrapper[4775]: E0123 
14:30:00.170366 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fecf032-f999-4138-a4e5-e2673da92749" containerName="extract-utilities" Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.170379 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fecf032-f999-4138-a4e5-e2673da92749" containerName="extract-utilities" Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.170639 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fecf032-f999-4138-a4e5-e2673da92749" containerName="registry-server" Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.171571 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb" Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.175254 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.175482 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.187007 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb"] Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.370791 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f319b79a-801c-4377-b8a2-cdc4435feb06-secret-volume\") pod \"collect-profiles-29486310-grwcb\" (UID: \"f319b79a-801c-4377-b8a2-cdc4435feb06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb" Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.371265 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfgvp\" (UniqueName: \"kubernetes.io/projected/f319b79a-801c-4377-b8a2-cdc4435feb06-kube-api-access-nfgvp\") pod \"collect-profiles-29486310-grwcb\" (UID: \"f319b79a-801c-4377-b8a2-cdc4435feb06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb" Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.371556 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f319b79a-801c-4377-b8a2-cdc4435feb06-config-volume\") pod \"collect-profiles-29486310-grwcb\" (UID: \"f319b79a-801c-4377-b8a2-cdc4435feb06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb" Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.473042 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f319b79a-801c-4377-b8a2-cdc4435feb06-config-volume\") pod \"collect-profiles-29486310-grwcb\" (UID: \"f319b79a-801c-4377-b8a2-cdc4435feb06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb" Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.473205 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f319b79a-801c-4377-b8a2-cdc4435feb06-secret-volume\") pod \"collect-profiles-29486310-grwcb\" (UID: \"f319b79a-801c-4377-b8a2-cdc4435feb06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb" Jan 23 14:30:00 crc 
kubenswrapper[4775]: I0123 14:30:00.473243 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfgvp\" (UniqueName: \"kubernetes.io/projected/f319b79a-801c-4377-b8a2-cdc4435feb06-kube-api-access-nfgvp\") pod \"collect-profiles-29486310-grwcb\" (UID: \"f319b79a-801c-4377-b8a2-cdc4435feb06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb" Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.475043 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f319b79a-801c-4377-b8a2-cdc4435feb06-config-volume\") pod \"collect-profiles-29486310-grwcb\" (UID: \"f319b79a-801c-4377-b8a2-cdc4435feb06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb" Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.481486 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f319b79a-801c-4377-b8a2-cdc4435feb06-secret-volume\") pod \"collect-profiles-29486310-grwcb\" (UID: \"f319b79a-801c-4377-b8a2-cdc4435feb06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb" Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.502329 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfgvp\" (UniqueName: \"kubernetes.io/projected/f319b79a-801c-4377-b8a2-cdc4435feb06-kube-api-access-nfgvp\") pod \"collect-profiles-29486310-grwcb\" (UID: \"f319b79a-801c-4377-b8a2-cdc4435feb06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb" Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.796844 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb" Jan 23 14:30:01 crc kubenswrapper[4775]: I0123 14:30:01.285441 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb"] Jan 23 14:30:02 crc kubenswrapper[4775]: I0123 14:30:02.239162 4775 generic.go:334] "Generic (PLEG): container finished" podID="f319b79a-801c-4377-b8a2-cdc4435feb06" containerID="29d1ea4fd73c7e0cf4e80e994a13c06aab543f04f718e1659d74ebea4f313156" exitCode=0 Jan 23 14:30:02 crc kubenswrapper[4775]: I0123 14:30:02.239219 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb" event={"ID":"f319b79a-801c-4377-b8a2-cdc4435feb06","Type":"ContainerDied","Data":"29d1ea4fd73c7e0cf4e80e994a13c06aab543f04f718e1659d74ebea4f313156"} Jan 23 14:30:02 crc kubenswrapper[4775]: I0123 14:30:02.239608 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb" event={"ID":"f319b79a-801c-4377-b8a2-cdc4435feb06","Type":"ContainerStarted","Data":"08f3baaf299a671db85f9ae20a89841b65983070de7bf5dd035bb4fe16777b95"} Jan 23 14:30:03 crc kubenswrapper[4775]: I0123 14:30:03.610148 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb" Jan 23 14:30:03 crc kubenswrapper[4775]: I0123 14:30:03.729665 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f319b79a-801c-4377-b8a2-cdc4435feb06-config-volume\") pod \"f319b79a-801c-4377-b8a2-cdc4435feb06\" (UID: \"f319b79a-801c-4377-b8a2-cdc4435feb06\") " Jan 23 14:30:03 crc kubenswrapper[4775]: I0123 14:30:03.729945 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfgvp\" (UniqueName: \"kubernetes.io/projected/f319b79a-801c-4377-b8a2-cdc4435feb06-kube-api-access-nfgvp\") pod \"f319b79a-801c-4377-b8a2-cdc4435feb06\" (UID: \"f319b79a-801c-4377-b8a2-cdc4435feb06\") " Jan 23 14:30:03 crc kubenswrapper[4775]: I0123 14:30:03.730011 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f319b79a-801c-4377-b8a2-cdc4435feb06-secret-volume\") pod \"f319b79a-801c-4377-b8a2-cdc4435feb06\" (UID: \"f319b79a-801c-4377-b8a2-cdc4435feb06\") " Jan 23 14:30:03 crc kubenswrapper[4775]: I0123 14:30:03.730567 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f319b79a-801c-4377-b8a2-cdc4435feb06-config-volume" (OuterVolumeSpecName: "config-volume") pod "f319b79a-801c-4377-b8a2-cdc4435feb06" (UID: "f319b79a-801c-4377-b8a2-cdc4435feb06"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:30:03 crc kubenswrapper[4775]: I0123 14:30:03.735846 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f319b79a-801c-4377-b8a2-cdc4435feb06-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f319b79a-801c-4377-b8a2-cdc4435feb06" (UID: "f319b79a-801c-4377-b8a2-cdc4435feb06"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:30:03 crc kubenswrapper[4775]: I0123 14:30:03.738139 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f319b79a-801c-4377-b8a2-cdc4435feb06-kube-api-access-nfgvp" (OuterVolumeSpecName: "kube-api-access-nfgvp") pod "f319b79a-801c-4377-b8a2-cdc4435feb06" (UID: "f319b79a-801c-4377-b8a2-cdc4435feb06"). InnerVolumeSpecName "kube-api-access-nfgvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:30:03 crc kubenswrapper[4775]: I0123 14:30:03.833921 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfgvp\" (UniqueName: \"kubernetes.io/projected/f319b79a-801c-4377-b8a2-cdc4435feb06-kube-api-access-nfgvp\") on node \"crc\" DevicePath \"\"" Jan 23 14:30:03 crc kubenswrapper[4775]: I0123 14:30:03.833975 4775 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f319b79a-801c-4377-b8a2-cdc4435feb06-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 14:30:03 crc kubenswrapper[4775]: I0123 14:30:03.833996 4775 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f319b79a-801c-4377-b8a2-cdc4435feb06-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 14:30:04 crc kubenswrapper[4775]: I0123 14:30:04.269454 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb" event={"ID":"f319b79a-801c-4377-b8a2-cdc4435feb06","Type":"ContainerDied","Data":"08f3baaf299a671db85f9ae20a89841b65983070de7bf5dd035bb4fe16777b95"} Jan 23 14:30:04 crc kubenswrapper[4775]: I0123 14:30:04.269728 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08f3baaf299a671db85f9ae20a89841b65983070de7bf5dd035bb4fe16777b95" Jan 23 14:30:04 crc kubenswrapper[4775]: I0123 14:30:04.269542 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb" Jan 23 14:30:04 crc kubenswrapper[4775]: I0123 14:30:04.713568 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:30:04 crc kubenswrapper[4775]: E0123 14:30:04.714365 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:30:15 crc kubenswrapper[4775]: I0123 14:30:15.714089 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:30:15 crc kubenswrapper[4775]: E0123 14:30:15.715200 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:30:26 crc kubenswrapper[4775]: I0123 14:30:26.714903 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:30:26 crc kubenswrapper[4775]: E0123 14:30:26.715912 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:30:40 crc kubenswrapper[4775]: I0123 14:30:40.714340 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:30:40 crc kubenswrapper[4775]: E0123 14:30:40.715256 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:30:45 crc kubenswrapper[4775]: I0123 14:30:45.115595 4775 scope.go:117] "RemoveContainer" containerID="2ee19493765c2e784fbd1d7e401c527b26da5317dbb06d292407f1d608775812" Jan 23 14:30:52 crc kubenswrapper[4775]: I0123 14:30:52.714584 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:30:52 crc kubenswrapper[4775]: E0123 14:30:52.716007 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.387984 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl"] Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.395187 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf"] Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.402264 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl"] Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.409767 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf"] Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.588272 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novaapi74fa-account-delete-hs5ds"] Jan 23 14:31:04 crc kubenswrapper[4775]: E0123 14:31:04.588557 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f319b79a-801c-4377-b8a2-cdc4435feb06" containerName="collect-profiles" Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.588572 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f319b79a-801c-4377-b8a2-cdc4435feb06" containerName="collect-profiles" Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.588730 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f319b79a-801c-4377-b8a2-cdc4435feb06" containerName="collect-profiles" Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.589234 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novaapi74fa-account-delete-hs5ds" Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.608641 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.608941 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="1b50fc49-3582-416c-9b89-0de07e733931" containerName="nova-kuttl-metadata-log" containerID="cri-o://f3fd1649a2aded52e00c39e1c1d72e905fd324149ec6f8d6ddfb00f2c288864e" gracePeriod=30 Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.609019 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="1b50fc49-3582-416c-9b89-0de07e733931" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://3b2c4fa8ecf48ebe29b25c30f72c2762525e314644186ec94469e2e547873058" gracePeriod=30 Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.656435 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell0dec4-account-delete-2b7mr"] Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.657630 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell0dec4-account-delete-2b7mr" Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.667479 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.667859 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="3e96bb87-5923-457f-bf02-51a1182e90bc" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3" gracePeriod=30 Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.680781 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapi74fa-account-delete-hs5ds"] Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.713880 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76"] Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.742876 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell0dec4-account-delete-2b7mr"] Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.763079 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.763267 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="5c5ea649-3ec6-4684-a543-92cbb2561c2c" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://0fc3116ad5e11a579023342a2bde7e94e9992b7817bc89662a590eddceef91c7" gracePeriod=30 Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.769491 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxlnq\" (UniqueName: \"kubernetes.io/projected/e62166aa-4f54-4eb0-aae1-69113a424df6-kube-api-access-wxlnq\") pod \"novaapi74fa-account-delete-hs5ds\" (UID: \"e62166aa-4f54-4eb0-aae1-69113a424df6\") " pod="nova-kuttl-default/novaapi74fa-account-delete-hs5ds" Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.769568 4775 
Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.769651 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62166aa-4f54-4eb0-aae1-69113a424df6-operator-scripts\") pod \"novaapi74fa-account-delete-hs5ds\" (UID: \"e62166aa-4f54-4eb0-aae1-69113a424df6\") " pod="nova-kuttl-default/novaapi74fa-account-delete-hs5ds"
Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.769708 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74a79494-7611-49ab-9b32-167dbeba6bb6-operator-scripts\") pod \"novacell0dec4-account-delete-2b7mr\" (UID: \"74a79494-7611-49ab-9b32-167dbeba6bb6\") " pod="nova-kuttl-default/novacell0dec4-account-delete-2b7mr"
Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.786136 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76"]
Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.808934 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.809438 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="40c54c9a-246a-4dab-af73-779d4d8539e4" containerName="nova-kuttl-api-log" containerID="cri-o://92c8db5180b73a5bbb803a67b1485926a5904ee84f310a02f878949deb43649d" gracePeriod=30
Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.809949 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="40c54c9a-246a-4dab-af73-779d4d8539e4" containerName="nova-kuttl-api-api" containerID="cri-o://19f64885adeeb673d9cba11e78c8b70596ea5a7795eddab4d7f824f5be3cd3c6" gracePeriod=30
Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.824585 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell1fcdd-account-delete-xg5hq"]
Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.825656 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell1fcdd-account-delete-xg5hq"
Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.853351 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell1fcdd-account-delete-xg5hq"]
Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.867864 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"]
Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.868107 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podUID="d6487ecc-f390-4837-8097-15e1b0bc28ac" containerName="nova-kuttl-cell1-novncproxy-novncproxy" containerID="cri-o://e9cd293241d6fb23305cd22644b9ba266d18f24d704393111b6fac686f6c275a" gracePeriod=30
Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.870939 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxlnq\" (UniqueName: \"kubernetes.io/projected/e62166aa-4f54-4eb0-aae1-69113a424df6-kube-api-access-wxlnq\") pod \"novaapi74fa-account-delete-hs5ds\" (UID: \"e62166aa-4f54-4eb0-aae1-69113a424df6\") " pod="nova-kuttl-default/novaapi74fa-account-delete-hs5ds"
Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.870989 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f48w\" (UniqueName: \"kubernetes.io/projected/74a79494-7611-49ab-9b32-167dbeba6bb6-kube-api-access-5f48w\") pod \"novacell0dec4-account-delete-2b7mr\" (UID: \"74a79494-7611-49ab-9b32-167dbeba6bb6\") " pod="nova-kuttl-default/novacell0dec4-account-delete-2b7mr"
Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.871040 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62166aa-4f54-4eb0-aae1-69113a424df6-operator-scripts\") pod \"novaapi74fa-account-delete-hs5ds\" (UID: \"e62166aa-4f54-4eb0-aae1-69113a424df6\") " pod="nova-kuttl-default/novaapi74fa-account-delete-hs5ds"
Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.871075 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74a79494-7611-49ab-9b32-167dbeba6bb6-operator-scripts\") pod \"novacell0dec4-account-delete-2b7mr\" (UID: \"74a79494-7611-49ab-9b32-167dbeba6bb6\") " pod="nova-kuttl-default/novacell0dec4-account-delete-2b7mr"
Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.872558 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62166aa-4f54-4eb0-aae1-69113a424df6-operator-scripts\") pod \"novaapi74fa-account-delete-hs5ds\" (UID: \"e62166aa-4f54-4eb0-aae1-69113a424df6\") " pod="nova-kuttl-default/novaapi74fa-account-delete-hs5ds"
Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.873967 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl"]
Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.877128 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74a79494-7611-49ab-9b32-167dbeba6bb6-operator-scripts\") pod \"novacell0dec4-account-delete-2b7mr\" (UID: \"74a79494-7611-49ab-9b32-167dbeba6bb6\") " pod="nova-kuttl-default/novacell0dec4-account-delete-2b7mr"
Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.881454 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"]
Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.881617 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="60634ae6-20de-4c41-b4bf-0fceda1df7e5" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://9417ed01719b61c92b4fcb5028120a0468f7bac0cd704d312ce33d3022cbce9e" gracePeriod=30
Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.891061 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl"]
Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.922833 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f48w\" (UniqueName: \"kubernetes.io/projected/74a79494-7611-49ab-9b32-167dbeba6bb6-kube-api-access-5f48w\") pod \"novacell0dec4-account-delete-2b7mr\" (UID: \"74a79494-7611-49ab-9b32-167dbeba6bb6\") " pod="nova-kuttl-default/novacell0dec4-account-delete-2b7mr"
Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.928364 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxlnq\" (UniqueName: \"kubernetes.io/projected/e62166aa-4f54-4eb0-aae1-69113a424df6-kube-api-access-wxlnq\") pod \"novaapi74fa-account-delete-hs5ds\" (UID: \"e62166aa-4f54-4eb0-aae1-69113a424df6\") " pod="nova-kuttl-default/novaapi74fa-account-delete-hs5ds"
Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.939181 4775 generic.go:334] "Generic (PLEG): container finished" podID="1b50fc49-3582-416c-9b89-0de07e733931" containerID="f3fd1649a2aded52e00c39e1c1d72e905fd324149ec6f8d6ddfb00f2c288864e" exitCode=143
Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.939222 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"1b50fc49-3582-416c-9b89-0de07e733931","Type":"ContainerDied","Data":"f3fd1649a2aded52e00c39e1c1d72e905fd324149ec6f8d6ddfb00f2c288864e"}
Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.972287 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2cdn\" (UniqueName: \"kubernetes.io/projected/2868ba1d-ce52-4e16-b1a5-f8a699c07b94-kube-api-access-b2cdn\") pod \"novacell1fcdd-account-delete-xg5hq\" (UID: \"2868ba1d-ce52-4e16-b1a5-f8a699c07b94\") " pod="nova-kuttl-default/novacell1fcdd-account-delete-xg5hq"
Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.972341 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2868ba1d-ce52-4e16-b1a5-f8a699c07b94-operator-scripts\") pod \"novacell1fcdd-account-delete-xg5hq\" (UID: \"2868ba1d-ce52-4e16-b1a5-f8a699c07b94\") " pod="nova-kuttl-default/novacell1fcdd-account-delete-xg5hq"
Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.976302 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell0dec4-account-delete-2b7mr"
Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.073835 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2cdn\" (UniqueName: \"kubernetes.io/projected/2868ba1d-ce52-4e16-b1a5-f8a699c07b94-kube-api-access-b2cdn\") pod \"novacell1fcdd-account-delete-xg5hq\" (UID: \"2868ba1d-ce52-4e16-b1a5-f8a699c07b94\") " pod="nova-kuttl-default/novacell1fcdd-account-delete-xg5hq"
Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.074081 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2868ba1d-ce52-4e16-b1a5-f8a699c07b94-operator-scripts\") pod \"novacell1fcdd-account-delete-xg5hq\" (UID: \"2868ba1d-ce52-4e16-b1a5-f8a699c07b94\") " pod="nova-kuttl-default/novacell1fcdd-account-delete-xg5hq"
Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.074954 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2868ba1d-ce52-4e16-b1a5-f8a699c07b94-operator-scripts\") pod \"novacell1fcdd-account-delete-xg5hq\" (UID: \"2868ba1d-ce52-4e16-b1a5-f8a699c07b94\") " pod="nova-kuttl-default/novacell1fcdd-account-delete-xg5hq"
Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.100344 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2cdn\" (UniqueName: \"kubernetes.io/projected/2868ba1d-ce52-4e16-b1a5-f8a699c07b94-kube-api-access-b2cdn\") pod \"novacell1fcdd-account-delete-xg5hq\" (UID: \"2868ba1d-ce52-4e16-b1a5-f8a699c07b94\") " pod="nova-kuttl-default/novacell1fcdd-account-delete-xg5hq"
Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.165875 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell1fcdd-account-delete-xg5hq"
Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.218192 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapi74fa-account-delete-hs5ds"
Need to start a new one" pod="nova-kuttl-default/novaapi74fa-account-delete-hs5ds" Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.391302 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell0dec4-account-delete-2b7mr"] Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.543929 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podUID="d6487ecc-f390-4837-8097-15e1b0bc28ac" containerName="nova-kuttl-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"http://10.217.0.129:6080/vnc_lite.html\": dial tcp 10.217.0.129:6080: connect: connection refused" Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.597722 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell1fcdd-account-delete-xg5hq"] Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.736253 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="470fdecf-a054-4735-90e9-82e8f2df7393" path="/var/lib/kubelet/pods/470fdecf-a054-4735-90e9-82e8f2df7393/volumes" Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.737022 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c069034-d3fc-478b-a45d-2d6c64baf640" path="/var/lib/kubelet/pods/5c069034-d3fc-478b-a45d-2d6c64baf640/volumes" Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.737545 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e6ea152-3ef9-4ed3-85c8-b6798fa8d084" path="/var/lib/kubelet/pods/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084/volumes" Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.738087 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4b500f0-4005-40b9-a54d-0769cc8717f0" path="/var/lib/kubelet/pods/e4b500f0-4005-40b9-a54d-0769cc8717f0/volumes" Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.743249 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapi74fa-account-delete-hs5ds"] Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.893793 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.956307 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1fcdd-account-delete-xg5hq" event={"ID":"2868ba1d-ce52-4e16-b1a5-f8a699c07b94","Type":"ContainerStarted","Data":"799ce1823863a3c15c53a4d22727a916392492bc10d370e2462dbc8b6ea31ac8"} Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.956385 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1fcdd-account-delete-xg5hq" event={"ID":"2868ba1d-ce52-4e16-b1a5-f8a699c07b94","Type":"ContainerStarted","Data":"0d3e2cb601d2914db92f9a6a496a379ceafd3bfd20c4312448a83fd697cb56ef"} Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.961181 4775 generic.go:334] "Generic (PLEG): container finished" podID="74a79494-7611-49ab-9b32-167dbeba6bb6" containerID="4f1cabf38bb4ec4b946564e2b7accc422c82ed3dca66b33da4fca4b19d4c5643" exitCode=0 Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.961286 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell0dec4-account-delete-2b7mr" event={"ID":"74a79494-7611-49ab-9b32-167dbeba6bb6","Type":"ContainerDied","Data":"4f1cabf38bb4ec4b946564e2b7accc422c82ed3dca66b33da4fca4b19d4c5643"} Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.961319 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell0dec4-account-delete-2b7mr" event={"ID":"74a79494-7611-49ab-9b32-167dbeba6bb6","Type":"ContainerStarted","Data":"c3f23419eba8102b471ea95d077ddfa50f5c43e670169bf2430a062fd39be852"} Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.963587 4775 generic.go:334] "Generic (PLEG): container finished" podID="d6487ecc-f390-4837-8097-15e1b0bc28ac" containerID="e9cd293241d6fb23305cd22644b9ba266d18f24d704393111b6fac686f6c275a" exitCode=0 Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.963660 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"d6487ecc-f390-4837-8097-15e1b0bc28ac","Type":"ContainerDied","Data":"e9cd293241d6fb23305cd22644b9ba266d18f24d704393111b6fac686f6c275a"} Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.963680 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.963697 4775 scope.go:117] "RemoveContainer" containerID="e9cd293241d6fb23305cd22644b9ba266d18f24d704393111b6fac686f6c275a" Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.963686 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"d6487ecc-f390-4837-8097-15e1b0bc28ac","Type":"ContainerDied","Data":"f3a42cea8fd58140cfe12473c775a1de35761c7ed3cab47b52b03cbea0efb84b"} Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.971370 4775 generic.go:334] "Generic (PLEG): container finished" podID="40c54c9a-246a-4dab-af73-779d4d8539e4" containerID="92c8db5180b73a5bbb803a67b1485926a5904ee84f310a02f878949deb43649d" exitCode=143 Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.971486 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"40c54c9a-246a-4dab-af73-779d4d8539e4","Type":"ContainerDied","Data":"92c8db5180b73a5bbb803a67b1485926a5904ee84f310a02f878949deb43649d"} Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.972988 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/novacell1fcdd-account-delete-xg5hq" podStartSLOduration=1.972975326 podStartE2EDuration="1.972975326s" podCreationTimestamp="2026-01-23 14:31:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:31:05.970746537 +0000 UTC m=+1612.965575357" watchObservedRunningTime="2026-01-23 14:31:05.972975326 +0000 UTC m=+1612.967804066" Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.974074 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapi74fa-account-delete-hs5ds" event={"ID":"e62166aa-4f54-4eb0-aae1-69113a424df6","Type":"ContainerStarted","Data":"56c812b1ab00fd7b69cb6786223a7c5ead5a6096821beab6667bb79fc9b54916"} Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.987433 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msdnd\" (UniqueName: \"kubernetes.io/projected/d6487ecc-f390-4837-8097-15e1b0bc28ac-kube-api-access-msdnd\") pod \"d6487ecc-f390-4837-8097-15e1b0bc28ac\" (UID: \"d6487ecc-f390-4837-8097-15e1b0bc28ac\") " Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.987519 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6487ecc-f390-4837-8097-15e1b0bc28ac-config-data\") pod \"d6487ecc-f390-4837-8097-15e1b0bc28ac\" (UID: \"d6487ecc-f390-4837-8097-15e1b0bc28ac\") " Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.993304 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6487ecc-f390-4837-8097-15e1b0bc28ac-kube-api-access-msdnd" (OuterVolumeSpecName: "kube-api-access-msdnd") pod "d6487ecc-f390-4837-8097-15e1b0bc28ac" (UID: "d6487ecc-f390-4837-8097-15e1b0bc28ac"). InnerVolumeSpecName "kube-api-access-msdnd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.017023 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6487ecc-f390-4837-8097-15e1b0bc28ac-config-data" (OuterVolumeSpecName: "config-data") pod "d6487ecc-f390-4837-8097-15e1b0bc28ac" (UID: "d6487ecc-f390-4837-8097-15e1b0bc28ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.074169 4775 scope.go:117] "RemoveContainer" containerID="e9cd293241d6fb23305cd22644b9ba266d18f24d704393111b6fac686f6c275a" Jan 23 14:31:06 crc kubenswrapper[4775]: E0123 14:31:06.074926 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9cd293241d6fb23305cd22644b9ba266d18f24d704393111b6fac686f6c275a\": container with ID starting with e9cd293241d6fb23305cd22644b9ba266d18f24d704393111b6fac686f6c275a not found: ID does not exist" containerID="e9cd293241d6fb23305cd22644b9ba266d18f24d704393111b6fac686f6c275a" Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.074992 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9cd293241d6fb23305cd22644b9ba266d18f24d704393111b6fac686f6c275a"} err="failed to get container status \"e9cd293241d6fb23305cd22644b9ba266d18f24d704393111b6fac686f6c275a\": rpc error: code = NotFound desc = could not find container \"e9cd293241d6fb23305cd22644b9ba266d18f24d704393111b6fac686f6c275a\": container with ID starting with e9cd293241d6fb23305cd22644b9ba266d18f24d704393111b6fac686f6c275a not found: ID does not exist" Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.090314 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msdnd\" (UniqueName: \"kubernetes.io/projected/d6487ecc-f390-4837-8097-15e1b0bc28ac-kube-api-access-msdnd\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.090344 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6487ecc-f390-4837-8097-15e1b0bc28ac-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:06 crc kubenswrapper[4775]: E0123 14:31:06.161776 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 14:31:06 crc kubenswrapper[4775]: E0123 14:31:06.163541 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 14:31:06 crc kubenswrapper[4775]: E0123 14:31:06.169079 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 14:31:06 crc kubenswrapper[4775]: E0123 14:31:06.169133 4775 prober.go:104] "Probe errored" err="rpc error: 
Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.332437 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"]
Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.340279 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"]
Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.364446 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.496726 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp587\" (UniqueName: \"kubernetes.io/projected/60634ae6-20de-4c41-b4bf-0fceda1df7e5-kube-api-access-pp587\") pod \"60634ae6-20de-4c41-b4bf-0fceda1df7e5\" (UID: \"60634ae6-20de-4c41-b4bf-0fceda1df7e5\") "
Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.496973 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60634ae6-20de-4c41-b4bf-0fceda1df7e5-config-data\") pod \"60634ae6-20de-4c41-b4bf-0fceda1df7e5\" (UID: \"60634ae6-20de-4c41-b4bf-0fceda1df7e5\") "
Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.501553 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60634ae6-20de-4c41-b4bf-0fceda1df7e5-kube-api-access-pp587" (OuterVolumeSpecName: "kube-api-access-pp587") pod "60634ae6-20de-4c41-b4bf-0fceda1df7e5" (UID: "60634ae6-20de-4c41-b4bf-0fceda1df7e5"). InnerVolumeSpecName "kube-api-access-pp587". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.527892 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60634ae6-20de-4c41-b4bf-0fceda1df7e5-config-data" (OuterVolumeSpecName: "config-data") pod "60634ae6-20de-4c41-b4bf-0fceda1df7e5" (UID: "60634ae6-20de-4c41-b4bf-0fceda1df7e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.599560 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60634ae6-20de-4c41-b4bf-0fceda1df7e5-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.599616 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp587\" (UniqueName: \"kubernetes.io/projected/60634ae6-20de-4c41-b4bf-0fceda1df7e5-kube-api-access-pp587\") on node \"crc\" DevicePath \"\""
Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.984690 4775 generic.go:334] "Generic (PLEG): container finished" podID="e62166aa-4f54-4eb0-aae1-69113a424df6" containerID="7683bb31e0e3c33c12802ae8ef8cb905ee4053a0b8cff940fda829caf0802a6a" exitCode=0
Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.984746 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapi74fa-account-delete-hs5ds" event={"ID":"e62166aa-4f54-4eb0-aae1-69113a424df6","Type":"ContainerDied","Data":"7683bb31e0e3c33c12802ae8ef8cb905ee4053a0b8cff940fda829caf0802a6a"}
Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.986331 4775 generic.go:334] "Generic (PLEG): container finished" podID="2868ba1d-ce52-4e16-b1a5-f8a699c07b94" containerID="799ce1823863a3c15c53a4d22727a916392492bc10d370e2462dbc8b6ea31ac8" exitCode=0
Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.986373 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1fcdd-account-delete-xg5hq" event={"ID":"2868ba1d-ce52-4e16-b1a5-f8a699c07b94","Type":"ContainerDied","Data":"799ce1823863a3c15c53a4d22727a916392492bc10d370e2462dbc8b6ea31ac8"}
Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.988423 4775 generic.go:334] "Generic (PLEG): container finished" podID="60634ae6-20de-4c41-b4bf-0fceda1df7e5" containerID="9417ed01719b61c92b4fcb5028120a0468f7bac0cd704d312ce33d3022cbce9e" exitCode=0
Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.988454 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.988474 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"60634ae6-20de-4c41-b4bf-0fceda1df7e5","Type":"ContainerDied","Data":"9417ed01719b61c92b4fcb5028120a0468f7bac0cd704d312ce33d3022cbce9e"} Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.988511 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"60634ae6-20de-4c41-b4bf-0fceda1df7e5","Type":"ContainerDied","Data":"d8f1f0f6e7f62499789debda728a77acf84ec6f7e20d7816daa6f9e8b8134f7b"} Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.988534 4775 scope.go:117] "RemoveContainer" containerID="9417ed01719b61c92b4fcb5028120a0468f7bac0cd704d312ce33d3022cbce9e" Jan 23 14:31:07 crc kubenswrapper[4775]: I0123 14:31:07.016098 4775 scope.go:117] "RemoveContainer" containerID="9417ed01719b61c92b4fcb5028120a0468f7bac0cd704d312ce33d3022cbce9e" Jan 23 14:31:07 crc kubenswrapper[4775]: E0123 14:31:07.018060 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9417ed01719b61c92b4fcb5028120a0468f7bac0cd704d312ce33d3022cbce9e\": container with ID starting with 9417ed01719b61c92b4fcb5028120a0468f7bac0cd704d312ce33d3022cbce9e not found: ID does not exist" containerID="9417ed01719b61c92b4fcb5028120a0468f7bac0cd704d312ce33d3022cbce9e" Jan 23 14:31:07 crc kubenswrapper[4775]: I0123 14:31:07.018107 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9417ed01719b61c92b4fcb5028120a0468f7bac0cd704d312ce33d3022cbce9e"} err="failed to get container status \"9417ed01719b61c92b4fcb5028120a0468f7bac0cd704d312ce33d3022cbce9e\": rpc error: code = NotFound desc = could not find container \"9417ed01719b61c92b4fcb5028120a0468f7bac0cd704d312ce33d3022cbce9e\": container with ID starting with 9417ed01719b61c92b4fcb5028120a0468f7bac0cd704d312ce33d3022cbce9e not found: ID does not exist" Jan 23 14:31:07 crc kubenswrapper[4775]: I0123 14:31:07.049290 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:31:07 crc kubenswrapper[4775]: I0123 14:31:07.057199 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:31:07 crc kubenswrapper[4775]: I0123 14:31:07.351748 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell0dec4-account-delete-2b7mr" Jan 23 14:31:07 crc kubenswrapper[4775]: I0123 14:31:07.512366 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74a79494-7611-49ab-9b32-167dbeba6bb6-operator-scripts\") pod \"74a79494-7611-49ab-9b32-167dbeba6bb6\" (UID: \"74a79494-7611-49ab-9b32-167dbeba6bb6\") " Jan 23 14:31:07 crc kubenswrapper[4775]: I0123 14:31:07.513103 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f48w\" (UniqueName: \"kubernetes.io/projected/74a79494-7611-49ab-9b32-167dbeba6bb6-kube-api-access-5f48w\") pod \"74a79494-7611-49ab-9b32-167dbeba6bb6\" (UID: \"74a79494-7611-49ab-9b32-167dbeba6bb6\") " Jan 23 14:31:07 crc kubenswrapper[4775]: I0123 14:31:07.513996 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74a79494-7611-49ab-9b32-167dbeba6bb6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "74a79494-7611-49ab-9b32-167dbeba6bb6" (UID: "74a79494-7611-49ab-9b32-167dbeba6bb6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:31:07 crc kubenswrapper[4775]: I0123 14:31:07.521248 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74a79494-7611-49ab-9b32-167dbeba6bb6-kube-api-access-5f48w" (OuterVolumeSpecName: "kube-api-access-5f48w") pod "74a79494-7611-49ab-9b32-167dbeba6bb6" (UID: "74a79494-7611-49ab-9b32-167dbeba6bb6"). InnerVolumeSpecName "kube-api-access-5f48w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:07 crc kubenswrapper[4775]: I0123 14:31:07.615943 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74a79494-7611-49ab-9b32-167dbeba6bb6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:07 crc kubenswrapper[4775]: I0123 14:31:07.615989 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f48w\" (UniqueName: \"kubernetes.io/projected/74a79494-7611-49ab-9b32-167dbeba6bb6-kube-api-access-5f48w\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:07 crc kubenswrapper[4775]: I0123 14:31:07.714299 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:31:07 crc kubenswrapper[4775]: E0123 14:31:07.714854 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:31:07 crc kubenswrapper[4775]: I0123 14:31:07.727163 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60634ae6-20de-4c41-b4bf-0fceda1df7e5" path="/var/lib/kubelet/pods/60634ae6-20de-4c41-b4bf-0fceda1df7e5/volumes" Jan 23 14:31:07 crc kubenswrapper[4775]: I0123 14:31:07.728181 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6487ecc-f390-4837-8097-15e1b0bc28ac" path="/var/lib/kubelet/pods/d6487ecc-f390-4837-8097-15e1b0bc28ac/volumes" Jan 23 14:31:07 crc kubenswrapper[4775]: I0123 14:31:07.999270 4775 generic.go:334] "Generic (PLEG): 
container finished" podID="5c5ea649-3ec6-4684-a543-92cbb2561c2c" containerID="0fc3116ad5e11a579023342a2bde7e94e9992b7817bc89662a590eddceef91c7" exitCode=0 Jan 23 14:31:07 crc kubenswrapper[4775]: I0123 14:31:07.999389 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"5c5ea649-3ec6-4684-a543-92cbb2561c2c","Type":"ContainerDied","Data":"0fc3116ad5e11a579023342a2bde7e94e9992b7817bc89662a590eddceef91c7"} Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.003269 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell0dec4-account-delete-2b7mr" event={"ID":"74a79494-7611-49ab-9b32-167dbeba6bb6","Type":"ContainerDied","Data":"c3f23419eba8102b471ea95d077ddfa50f5c43e670169bf2430a062fd39be852"} Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.003316 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3f23419eba8102b471ea95d077ddfa50f5c43e670169bf2430a062fd39be852" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.003327 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell0dec4-account-delete-2b7mr" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.113054 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.226575 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c5ea649-3ec6-4684-a543-92cbb2561c2c-config-data\") pod \"5c5ea649-3ec6-4684-a543-92cbb2561c2c\" (UID: \"5c5ea649-3ec6-4684-a543-92cbb2561c2c\") " Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.230008 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qk566\" (UniqueName: \"kubernetes.io/projected/5c5ea649-3ec6-4684-a543-92cbb2561c2c-kube-api-access-qk566\") pod \"5c5ea649-3ec6-4684-a543-92cbb2561c2c\" (UID: \"5c5ea649-3ec6-4684-a543-92cbb2561c2c\") " Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.252182 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c5ea649-3ec6-4684-a543-92cbb2561c2c-kube-api-access-qk566" (OuterVolumeSpecName: "kube-api-access-qk566") pod "5c5ea649-3ec6-4684-a543-92cbb2561c2c" (UID: "5c5ea649-3ec6-4684-a543-92cbb2561c2c"). InnerVolumeSpecName "kube-api-access-qk566". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.267144 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c5ea649-3ec6-4684-a543-92cbb2561c2c-config-data" (OuterVolumeSpecName: "config-data") pod "5c5ea649-3ec6-4684-a543-92cbb2561c2c" (UID: "5c5ea649-3ec6-4684-a543-92cbb2561c2c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.332176 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c5ea649-3ec6-4684-a543-92cbb2561c2c-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.332210 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk566\" (UniqueName: \"kubernetes.io/projected/5c5ea649-3ec6-4684-a543-92cbb2561c2c-kube-api-access-qk566\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.446402 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell1fcdd-account-delete-xg5hq" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.536327 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2cdn\" (UniqueName: \"kubernetes.io/projected/2868ba1d-ce52-4e16-b1a5-f8a699c07b94-kube-api-access-b2cdn\") pod \"2868ba1d-ce52-4e16-b1a5-f8a699c07b94\" (UID: \"2868ba1d-ce52-4e16-b1a5-f8a699c07b94\") " Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.536376 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2868ba1d-ce52-4e16-b1a5-f8a699c07b94-operator-scripts\") pod \"2868ba1d-ce52-4e16-b1a5-f8a699c07b94\" (UID: \"2868ba1d-ce52-4e16-b1a5-f8a699c07b94\") " Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.537016 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2868ba1d-ce52-4e16-b1a5-f8a699c07b94-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2868ba1d-ce52-4e16-b1a5-f8a699c07b94" (UID: "2868ba1d-ce52-4e16-b1a5-f8a699c07b94"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.540739 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2868ba1d-ce52-4e16-b1a5-f8a699c07b94-kube-api-access-b2cdn" (OuterVolumeSpecName: "kube-api-access-b2cdn") pod "2868ba1d-ce52-4e16-b1a5-f8a699c07b94" (UID: "2868ba1d-ce52-4e16-b1a5-f8a699c07b94"). InnerVolumeSpecName "kube-api-access-b2cdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.574943 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapi74fa-account-delete-hs5ds" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.605406 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.648127 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2cdn\" (UniqueName: \"kubernetes.io/projected/2868ba1d-ce52-4e16-b1a5-f8a699c07b94-kube-api-access-b2cdn\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.648168 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2868ba1d-ce52-4e16-b1a5-f8a699c07b94-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.749640 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxwzv\" (UniqueName: \"kubernetes.io/projected/1b50fc49-3582-416c-9b89-0de07e733931-kube-api-access-nxwzv\") pod \"1b50fc49-3582-416c-9b89-0de07e733931\" (UID: \"1b50fc49-3582-416c-9b89-0de07e733931\") " Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.749752 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b50fc49-3582-416c-9b89-0de07e733931-logs\") pod \"1b50fc49-3582-416c-9b89-0de07e733931\" (UID: \"1b50fc49-3582-416c-9b89-0de07e733931\") " Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.749823 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62166aa-4f54-4eb0-aae1-69113a424df6-operator-scripts\") pod \"e62166aa-4f54-4eb0-aae1-69113a424df6\" (UID: \"e62166aa-4f54-4eb0-aae1-69113a424df6\") " Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.749862 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b50fc49-3582-416c-9b89-0de07e733931-config-data\") pod \"1b50fc49-3582-416c-9b89-0de07e733931\" (UID: \"1b50fc49-3582-416c-9b89-0de07e733931\") " Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.749934 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxlnq\" (UniqueName: \"kubernetes.io/projected/e62166aa-4f54-4eb0-aae1-69113a424df6-kube-api-access-wxlnq\") pod \"e62166aa-4f54-4eb0-aae1-69113a424df6\" (UID: \"e62166aa-4f54-4eb0-aae1-69113a424df6\") " Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.750443 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e62166aa-4f54-4eb0-aae1-69113a424df6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e62166aa-4f54-4eb0-aae1-69113a424df6" (UID: "e62166aa-4f54-4eb0-aae1-69113a424df6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.750533 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b50fc49-3582-416c-9b89-0de07e733931-logs" (OuterVolumeSpecName: "logs") pod "1b50fc49-3582-416c-9b89-0de07e733931" (UID: "1b50fc49-3582-416c-9b89-0de07e733931"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.753453 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b50fc49-3582-416c-9b89-0de07e733931-kube-api-access-nxwzv" (OuterVolumeSpecName: "kube-api-access-nxwzv") pod "1b50fc49-3582-416c-9b89-0de07e733931" (UID: "1b50fc49-3582-416c-9b89-0de07e733931"). InnerVolumeSpecName "kube-api-access-nxwzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.756374 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e62166aa-4f54-4eb0-aae1-69113a424df6-kube-api-access-wxlnq" (OuterVolumeSpecName: "kube-api-access-wxlnq") pod "e62166aa-4f54-4eb0-aae1-69113a424df6" (UID: "e62166aa-4f54-4eb0-aae1-69113a424df6"). InnerVolumeSpecName "kube-api-access-wxlnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.775514 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b50fc49-3582-416c-9b89-0de07e733931-config-data" (OuterVolumeSpecName: "config-data") pod "1b50fc49-3582-416c-9b89-0de07e733931" (UID: "1b50fc49-3582-416c-9b89-0de07e733931"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.776028 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.851516 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b50fc49-3582-416c-9b89-0de07e733931-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.851551 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxlnq\" (UniqueName: \"kubernetes.io/projected/e62166aa-4f54-4eb0-aae1-69113a424df6-kube-api-access-wxlnq\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.851564 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxwzv\" (UniqueName: \"kubernetes.io/projected/1b50fc49-3582-416c-9b89-0de07e733931-kube-api-access-nxwzv\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.851575 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b50fc49-3582-416c-9b89-0de07e733931-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.851587 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62166aa-4f54-4eb0-aae1-69113a424df6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.953264 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40c54c9a-246a-4dab-af73-779d4d8539e4-logs\") pod \"40c54c9a-246a-4dab-af73-779d4d8539e4\" (UID: \"40c54c9a-246a-4dab-af73-779d4d8539e4\") " Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.953539 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgxwb\" (UniqueName: \"kubernetes.io/projected/40c54c9a-246a-4dab-af73-779d4d8539e4-kube-api-access-vgxwb\") pod 
\"40c54c9a-246a-4dab-af73-779d4d8539e4\" (UID: \"40c54c9a-246a-4dab-af73-779d4d8539e4\") " Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.953596 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40c54c9a-246a-4dab-af73-779d4d8539e4-config-data\") pod \"40c54c9a-246a-4dab-af73-779d4d8539e4\" (UID: \"40c54c9a-246a-4dab-af73-779d4d8539e4\") " Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.954730 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40c54c9a-246a-4dab-af73-779d4d8539e4-logs" (OuterVolumeSpecName: "logs") pod "40c54c9a-246a-4dab-af73-779d4d8539e4" (UID: "40c54c9a-246a-4dab-af73-779d4d8539e4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.958228 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40c54c9a-246a-4dab-af73-779d4d8539e4-kube-api-access-vgxwb" (OuterVolumeSpecName: "kube-api-access-vgxwb") pod "40c54c9a-246a-4dab-af73-779d4d8539e4" (UID: "40c54c9a-246a-4dab-af73-779d4d8539e4"). InnerVolumeSpecName "kube-api-access-vgxwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.996640 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40c54c9a-246a-4dab-af73-779d4d8539e4-config-data" (OuterVolumeSpecName: "config-data") pod "40c54c9a-246a-4dab-af73-779d4d8539e4" (UID: "40c54c9a-246a-4dab-af73-779d4d8539e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.018018 4775 generic.go:334] "Generic (PLEG): container finished" podID="40c54c9a-246a-4dab-af73-779d4d8539e4" containerID="19f64885adeeb673d9cba11e78c8b70596ea5a7795eddab4d7f824f5be3cd3c6" exitCode=0 Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.018108 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"40c54c9a-246a-4dab-af73-779d4d8539e4","Type":"ContainerDied","Data":"19f64885adeeb673d9cba11e78c8b70596ea5a7795eddab4d7f824f5be3cd3c6"} Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.018538 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"40c54c9a-246a-4dab-af73-779d4d8539e4","Type":"ContainerDied","Data":"cc067c426dd03351b5a8a8591d3c2c83477c0b5d51ea784970cfb53f7e6d267e"} Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.018138 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.018873 4775 scope.go:117] "RemoveContainer" containerID="19f64885adeeb673d9cba11e78c8b70596ea5a7795eddab4d7f824f5be3cd3c6" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.023136 4775 generic.go:334] "Generic (PLEG): container finished" podID="1b50fc49-3582-416c-9b89-0de07e733931" containerID="3b2c4fa8ecf48ebe29b25c30f72c2762525e314644186ec94469e2e547873058" exitCode=0 Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.023272 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"1b50fc49-3582-416c-9b89-0de07e733931","Type":"ContainerDied","Data":"3b2c4fa8ecf48ebe29b25c30f72c2762525e314644186ec94469e2e547873058"} Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.023312 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"1b50fc49-3582-416c-9b89-0de07e733931","Type":"ContainerDied","Data":"0da722dd90642caf85fa0f11331565aec51183c8f53f1cf43b2602bc06530edf"} Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.023570 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.029230 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapi74fa-account-delete-hs5ds" event={"ID":"e62166aa-4f54-4eb0-aae1-69113a424df6","Type":"ContainerDied","Data":"56c812b1ab00fd7b69cb6786223a7c5ead5a6096821beab6667bb79fc9b54916"} Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.029294 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56c812b1ab00fd7b69cb6786223a7c5ead5a6096821beab6667bb79fc9b54916" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.029309 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapi74fa-account-delete-hs5ds" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.031494 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"5c5ea649-3ec6-4684-a543-92cbb2561c2c","Type":"ContainerDied","Data":"aae6c41a06b90b700f10ac781242a8cc1f26c49368ae3d0b71804b4f7c54253a"} Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.031525 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.035361 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1fcdd-account-delete-xg5hq" event={"ID":"2868ba1d-ce52-4e16-b1a5-f8a699c07b94","Type":"ContainerDied","Data":"0d3e2cb601d2914db92f9a6a496a379ceafd3bfd20c4312448a83fd697cb56ef"} Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.035443 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d3e2cb601d2914db92f9a6a496a379ceafd3bfd20c4312448a83fd697cb56ef" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.035550 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell1fcdd-account-delete-xg5hq" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.055909 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgxwb\" (UniqueName: \"kubernetes.io/projected/40c54c9a-246a-4dab-af73-779d4d8539e4-kube-api-access-vgxwb\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.055942 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40c54c9a-246a-4dab-af73-779d4d8539e4-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.055956 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40c54c9a-246a-4dab-af73-779d4d8539e4-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.107039 4775 scope.go:117] "RemoveContainer" containerID="92c8db5180b73a5bbb803a67b1485926a5904ee84f310a02f878949deb43649d" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.132946 4775 scope.go:117] "RemoveContainer" containerID="19f64885adeeb673d9cba11e78c8b70596ea5a7795eddab4d7f824f5be3cd3c6" Jan 23 14:31:09 crc kubenswrapper[4775]: E0123 14:31:09.134466 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19f64885adeeb673d9cba11e78c8b70596ea5a7795eddab4d7f824f5be3cd3c6\": container with ID starting with 19f64885adeeb673d9cba11e78c8b70596ea5a7795eddab4d7f824f5be3cd3c6 not found: ID does not exist" containerID="19f64885adeeb673d9cba11e78c8b70596ea5a7795eddab4d7f824f5be3cd3c6" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.134551 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19f64885adeeb673d9cba11e78c8b70596ea5a7795eddab4d7f824f5be3cd3c6"} err="failed to get container status \"19f64885adeeb673d9cba11e78c8b70596ea5a7795eddab4d7f824f5be3cd3c6\": rpc error: code = NotFound desc = could not find container \"19f64885adeeb673d9cba11e78c8b70596ea5a7795eddab4d7f824f5be3cd3c6\": container with ID starting with 19f64885adeeb673d9cba11e78c8b70596ea5a7795eddab4d7f824f5be3cd3c6 not found: ID does not exist" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.134587 4775 scope.go:117] "RemoveContainer" containerID="92c8db5180b73a5bbb803a67b1485926a5904ee84f310a02f878949deb43649d" Jan 23 14:31:09 crc kubenswrapper[4775]: E0123 14:31:09.135297 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92c8db5180b73a5bbb803a67b1485926a5904ee84f310a02f878949deb43649d\": container with ID starting with 92c8db5180b73a5bbb803a67b1485926a5904ee84f310a02f878949deb43649d not found: ID does not exist" containerID="92c8db5180b73a5bbb803a67b1485926a5904ee84f310a02f878949deb43649d" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.135351 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92c8db5180b73a5bbb803a67b1485926a5904ee84f310a02f878949deb43649d"} err="failed to get container status \"92c8db5180b73a5bbb803a67b1485926a5904ee84f310a02f878949deb43649d\": rpc error: code = NotFound desc = could not find container \"92c8db5180b73a5bbb803a67b1485926a5904ee84f310a02f878949deb43649d\": container with ID starting with 92c8db5180b73a5bbb803a67b1485926a5904ee84f310a02f878949deb43649d not found: ID does not exist" Jan 23 14:31:09 crc 
kubenswrapper[4775]: I0123 14:31:09.135405 4775 scope.go:117] "RemoveContainer" containerID="3b2c4fa8ecf48ebe29b25c30f72c2762525e314644186ec94469e2e547873058" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.138488 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.150839 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.164250 4775 scope.go:117] "RemoveContainer" containerID="f3fd1649a2aded52e00c39e1c1d72e905fd324149ec6f8d6ddfb00f2c288864e" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.166377 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.174920 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.183322 4775 scope.go:117] "RemoveContainer" containerID="3b2c4fa8ecf48ebe29b25c30f72c2762525e314644186ec94469e2e547873058" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.183659 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:31:09 crc kubenswrapper[4775]: E0123 14:31:09.184018 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b2c4fa8ecf48ebe29b25c30f72c2762525e314644186ec94469e2e547873058\": container with ID starting with 3b2c4fa8ecf48ebe29b25c30f72c2762525e314644186ec94469e2e547873058 not found: ID does not exist" containerID="3b2c4fa8ecf48ebe29b25c30f72c2762525e314644186ec94469e2e547873058" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.184172 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b2c4fa8ecf48ebe29b25c30f72c2762525e314644186ec94469e2e547873058"} err="failed to get container status \"3b2c4fa8ecf48ebe29b25c30f72c2762525e314644186ec94469e2e547873058\": rpc error: code = NotFound desc = could not find container \"3b2c4fa8ecf48ebe29b25c30f72c2762525e314644186ec94469e2e547873058\": container with ID starting with 3b2c4fa8ecf48ebe29b25c30f72c2762525e314644186ec94469e2e547873058 not found: ID does not exist" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.184335 4775 scope.go:117] "RemoveContainer" containerID="f3fd1649a2aded52e00c39e1c1d72e905fd324149ec6f8d6ddfb00f2c288864e" Jan 23 14:31:09 crc kubenswrapper[4775]: E0123 14:31:09.184863 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3fd1649a2aded52e00c39e1c1d72e905fd324149ec6f8d6ddfb00f2c288864e\": container with ID starting with f3fd1649a2aded52e00c39e1c1d72e905fd324149ec6f8d6ddfb00f2c288864e not found: ID does not exist" containerID="f3fd1649a2aded52e00c39e1c1d72e905fd324149ec6f8d6ddfb00f2c288864e" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.184930 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3fd1649a2aded52e00c39e1c1d72e905fd324149ec6f8d6ddfb00f2c288864e"} err="failed to get container status \"f3fd1649a2aded52e00c39e1c1d72e905fd324149ec6f8d6ddfb00f2c288864e\": rpc error: code = NotFound desc = could not find container \"f3fd1649a2aded52e00c39e1c1d72e905fd324149ec6f8d6ddfb00f2c288864e\": container with ID starting with 
f3fd1649a2aded52e00c39e1c1d72e905fd324149ec6f8d6ddfb00f2c288864e not found: ID does not exist" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.184974 4775 scope.go:117] "RemoveContainer" containerID="0fc3116ad5e11a579023342a2bde7e94e9992b7817bc89662a590eddceef91c7" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.191801 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.697037 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-nvvdc"] Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.704791 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-nvvdc"] Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.731246 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b50fc49-3582-416c-9b89-0de07e733931" path="/var/lib/kubelet/pods/1b50fc49-3582-416c-9b89-0de07e733931/volumes" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.732279 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40c54c9a-246a-4dab-af73-779d4d8539e4" path="/var/lib/kubelet/pods/40c54c9a-246a-4dab-af73-779d4d8539e4/volumes" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.733353 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c5ea649-3ec6-4684-a543-92cbb2561c2c" path="/var/lib/kubelet/pods/5c5ea649-3ec6-4684-a543-92cbb2561c2c/volumes" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.734779 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4" path="/var/lib/kubelet/pods/f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4/volumes" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.735656 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell0dec4-account-delete-2b7mr"] Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.735685 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn"] Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.742929 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novacell0dec4-account-delete-2b7mr"] Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.749683 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn"] Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.816062 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-q4r8h"] Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.826530 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-q4r8h"] Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.842413 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw"] Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.853274 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell1fcdd-account-delete-xg5hq"] Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.862240 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw"] Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.871732 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["nova-kuttl-default/novacell1fcdd-account-delete-xg5hq"] Jan 23 14:31:11 crc kubenswrapper[4775]: E0123 14:31:11.163308 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 14:31:11 crc kubenswrapper[4775]: E0123 14:31:11.165543 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 14:31:11 crc kubenswrapper[4775]: E0123 14:31:11.167396 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 14:31:11 crc kubenswrapper[4775]: E0123 14:31:11.167461 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="3e96bb87-5923-457f-bf02-51a1182e90bc" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:31:11 crc kubenswrapper[4775]: I0123 14:31:11.735227 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26928cf5-7a29-4fab-a501-5746726fc42a" path="/var/lib/kubelet/pods/26928cf5-7a29-4fab-a501-5746726fc42a/volumes" Jan 23 14:31:11 crc kubenswrapper[4775]: I0123 14:31:11.736657 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2868ba1d-ce52-4e16-b1a5-f8a699c07b94" path="/var/lib/kubelet/pods/2868ba1d-ce52-4e16-b1a5-f8a699c07b94/volumes" Jan 23 14:31:11 crc kubenswrapper[4775]: I0123 14:31:11.738051 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5980f4a0-814a-4f66-b637-80071a62061b" path="/var/lib/kubelet/pods/5980f4a0-814a-4f66-b637-80071a62061b/volumes" Jan 23 14:31:11 crc kubenswrapper[4775]: I0123 14:31:11.739292 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74a79494-7611-49ab-9b32-167dbeba6bb6" path="/var/lib/kubelet/pods/74a79494-7611-49ab-9b32-167dbeba6bb6/volumes" Jan 23 14:31:11 crc kubenswrapper[4775]: I0123 14:31:11.741589 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cce1ea66-c6e5-41e7-b0fc-f915fab736f9" path="/var/lib/kubelet/pods/cce1ea66-c6e5-41e7-b0fc-f915fab736f9/volumes" Jan 23 14:31:14 crc kubenswrapper[4775]: I0123 14:31:14.577985 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-db-create-4dbx9"] Jan 23 14:31:14 crc kubenswrapper[4775]: I0123 14:31:14.595711 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-db-create-4dbx9"] Jan 23 14:31:14 crc kubenswrapper[4775]: I0123 14:31:14.606110 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-74fa-account-create-update-r8n42"] Jan 23 14:31:14 crc kubenswrapper[4775]: I0123 14:31:14.618227 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["nova-kuttl-default/novaapi74fa-account-delete-hs5ds"] Jan 23 14:31:14 crc kubenswrapper[4775]: I0123 14:31:14.629553 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-74fa-account-create-update-r8n42"] Jan 23 14:31:14 crc kubenswrapper[4775]: I0123 14:31:14.635155 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novaapi74fa-account-delete-hs5ds"] Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.026842 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.099217 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e96bb87-5923-457f-bf02-51a1182e90bc-config-data\") pod \"3e96bb87-5923-457f-bf02-51a1182e90bc\" (UID: \"3e96bb87-5923-457f-bf02-51a1182e90bc\") " Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.100164 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj4np\" (UniqueName: \"kubernetes.io/projected/3e96bb87-5923-457f-bf02-51a1182e90bc-kube-api-access-pj4np\") pod \"3e96bb87-5923-457f-bf02-51a1182e90bc\" (UID: \"3e96bb87-5923-457f-bf02-51a1182e90bc\") " Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.101532 4775 generic.go:334] "Generic (PLEG): container finished" podID="3e96bb87-5923-457f-bf02-51a1182e90bc" containerID="ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3" exitCode=0 Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.101577 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"3e96bb87-5923-457f-bf02-51a1182e90bc","Type":"ContainerDied","Data":"ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3"} Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.101604 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"3e96bb87-5923-457f-bf02-51a1182e90bc","Type":"ContainerDied","Data":"5bbc8cbd22e1e763806e59239a30a31f8865fb7589db1e6ad2f16cc53daa3460"} Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.101620 4775 scope.go:117] "RemoveContainer" containerID="ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3" Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.101736 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.105275 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e96bb87-5923-457f-bf02-51a1182e90bc-kube-api-access-pj4np" (OuterVolumeSpecName: "kube-api-access-pj4np") pod "3e96bb87-5923-457f-bf02-51a1182e90bc" (UID: "3e96bb87-5923-457f-bf02-51a1182e90bc"). InnerVolumeSpecName "kube-api-access-pj4np". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.138946 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e96bb87-5923-457f-bf02-51a1182e90bc-config-data" (OuterVolumeSpecName: "config-data") pod "3e96bb87-5923-457f-bf02-51a1182e90bc" (UID: "3e96bb87-5923-457f-bf02-51a1182e90bc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.142264 4775 scope.go:117] "RemoveContainer" containerID="ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3" Jan 23 14:31:15 crc kubenswrapper[4775]: E0123 14:31:15.142966 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3\": container with ID starting with ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3 not found: ID does not exist" containerID="ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3" Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.143006 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3"} err="failed to get container status \"ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3\": rpc error: code = NotFound desc = could not find container \"ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3\": container with ID starting with ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3 not found: ID does not exist" Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.203625 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e96bb87-5923-457f-bf02-51a1182e90bc-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.203662 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj4np\" (UniqueName: \"kubernetes.io/projected/3e96bb87-5923-457f-bf02-51a1182e90bc-kube-api-access-pj4np\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.431982 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.437889 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.731636 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e96bb87-5923-457f-bf02-51a1182e90bc" path="/var/lib/kubelet/pods/3e96bb87-5923-457f-bf02-51a1182e90bc/volumes" Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.732743 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ed8da8c-1d52-44a3-b1c8-b68000003d91" path="/var/lib/kubelet/pods/9ed8da8c-1d52-44a3-b1c8-b68000003d91/volumes" Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.734047 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff" path="/var/lib/kubelet/pods/cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff/volumes" Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.735010 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e62166aa-4f54-4eb0-aae1-69113a424df6" path="/var/lib/kubelet/pods/e62166aa-4f54-4eb0-aae1-69113a424df6/volumes" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.015975 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-db-create-hn7kx"] Jan 23 14:31:17 crc kubenswrapper[4775]: E0123 14:31:17.016577 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74a79494-7611-49ab-9b32-167dbeba6bb6" containerName="mariadb-account-delete" Jan 23 
14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.016591 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="74a79494-7611-49ab-9b32-167dbeba6bb6" containerName="mariadb-account-delete" Jan 23 14:31:17 crc kubenswrapper[4775]: E0123 14:31:17.016601 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e62166aa-4f54-4eb0-aae1-69113a424df6" containerName="mariadb-account-delete" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.016610 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e62166aa-4f54-4eb0-aae1-69113a424df6" containerName="mariadb-account-delete" Jan 23 14:31:17 crc kubenswrapper[4775]: E0123 14:31:17.016626 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6487ecc-f390-4837-8097-15e1b0bc28ac" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.016635 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6487ecc-f390-4837-8097-15e1b0bc28ac" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 23 14:31:17 crc kubenswrapper[4775]: E0123 14:31:17.016656 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40c54c9a-246a-4dab-af73-779d4d8539e4" containerName="nova-kuttl-api-log" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.016664 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="40c54c9a-246a-4dab-af73-779d4d8539e4" containerName="nova-kuttl-api-log" Jan 23 14:31:17 crc kubenswrapper[4775]: E0123 14:31:17.016676 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40c54c9a-246a-4dab-af73-779d4d8539e4" containerName="nova-kuttl-api-api" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.016684 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="40c54c9a-246a-4dab-af73-779d4d8539e4" containerName="nova-kuttl-api-api" Jan 23 14:31:17 crc kubenswrapper[4775]: E0123 14:31:17.016701 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c5ea649-3ec6-4684-a543-92cbb2561c2c" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.016710 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c5ea649-3ec6-4684-a543-92cbb2561c2c" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:31:17 crc kubenswrapper[4775]: E0123 14:31:17.016724 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e96bb87-5923-457f-bf02-51a1182e90bc" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.016732 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e96bb87-5923-457f-bf02-51a1182e90bc" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:31:17 crc kubenswrapper[4775]: E0123 14:31:17.016748 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b50fc49-3582-416c-9b89-0de07e733931" containerName="nova-kuttl-metadata-log" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.016756 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b50fc49-3582-416c-9b89-0de07e733931" containerName="nova-kuttl-metadata-log" Jan 23 14:31:17 crc kubenswrapper[4775]: E0123 14:31:17.016768 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2868ba1d-ce52-4e16-b1a5-f8a699c07b94" containerName="mariadb-account-delete" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.016776 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="2868ba1d-ce52-4e16-b1a5-f8a699c07b94" containerName="mariadb-account-delete" Jan 23 
14:31:17 crc kubenswrapper[4775]: E0123 14:31:17.016790 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60634ae6-20de-4c41-b4bf-0fceda1df7e5" containerName="nova-kuttl-cell1-conductor-conductor" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.016816 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="60634ae6-20de-4c41-b4bf-0fceda1df7e5" containerName="nova-kuttl-cell1-conductor-conductor" Jan 23 14:31:17 crc kubenswrapper[4775]: E0123 14:31:17.016832 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b50fc49-3582-416c-9b89-0de07e733931" containerName="nova-kuttl-metadata-metadata" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.016840 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b50fc49-3582-416c-9b89-0de07e733931" containerName="nova-kuttl-metadata-metadata" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.017017 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b50fc49-3582-416c-9b89-0de07e733931" containerName="nova-kuttl-metadata-metadata" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.017036 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="40c54c9a-246a-4dab-af73-779d4d8539e4" containerName="nova-kuttl-api-log" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.017050 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6487ecc-f390-4837-8097-15e1b0bc28ac" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.017061 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e96bb87-5923-457f-bf02-51a1182e90bc" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.017074 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="60634ae6-20de-4c41-b4bf-0fceda1df7e5" containerName="nova-kuttl-cell1-conductor-conductor" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.017087 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="74a79494-7611-49ab-9b32-167dbeba6bb6" containerName="mariadb-account-delete" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.017101 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b50fc49-3582-416c-9b89-0de07e733931" containerName="nova-kuttl-metadata-log" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.017113 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="2868ba1d-ce52-4e16-b1a5-f8a699c07b94" containerName="mariadb-account-delete" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.017125 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c5ea649-3ec6-4684-a543-92cbb2561c2c" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.017136 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="40c54c9a-246a-4dab-af73-779d4d8539e4" containerName="nova-kuttl-api-api" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.017150 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="e62166aa-4f54-4eb0-aae1-69113a424df6" containerName="mariadb-account-delete" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.017730 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-hn7kx" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.032901 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-hn7kx"] Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.113851 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-bp7mf"] Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.114718 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-bp7mf" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.123323 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-bp7mf"] Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.140439 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a0d129e-9a65-484c-b8a6-ca5a0120d95d-operator-scripts\") pod \"nova-api-db-create-hn7kx\" (UID: \"5a0d129e-9a65-484c-b8a6-ca5a0120d95d\") " pod="nova-kuttl-default/nova-api-db-create-hn7kx" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.140703 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gvsp\" (UniqueName: \"kubernetes.io/projected/5a0d129e-9a65-484c-b8a6-ca5a0120d95d-kube-api-access-8gvsp\") pod \"nova-api-db-create-hn7kx\" (UID: \"5a0d129e-9a65-484c-b8a6-ca5a0120d95d\") " pod="nova-kuttl-default/nova-api-db-create-hn7kx" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.242500 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b494b92-3cd1-4b60-853c-a135bb158d8c-operator-scripts\") pod \"nova-cell0-db-create-bp7mf\" (UID: \"5b494b92-3cd1-4b60-853c-a135bb158d8c\") " pod="nova-kuttl-default/nova-cell0-db-create-bp7mf" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.242591 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gvsp\" (UniqueName: \"kubernetes.io/projected/5a0d129e-9a65-484c-b8a6-ca5a0120d95d-kube-api-access-8gvsp\") pod \"nova-api-db-create-hn7kx\" (UID: \"5a0d129e-9a65-484c-b8a6-ca5a0120d95d\") " pod="nova-kuttl-default/nova-api-db-create-hn7kx" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.242636 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a0d129e-9a65-484c-b8a6-ca5a0120d95d-operator-scripts\") pod \"nova-api-db-create-hn7kx\" (UID: \"5a0d129e-9a65-484c-b8a6-ca5a0120d95d\") " pod="nova-kuttl-default/nova-api-db-create-hn7kx" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.242680 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bmj8\" (UniqueName: \"kubernetes.io/projected/5b494b92-3cd1-4b60-853c-a135bb158d8c-kube-api-access-9bmj8\") pod \"nova-cell0-db-create-bp7mf\" (UID: \"5b494b92-3cd1-4b60-853c-a135bb158d8c\") " pod="nova-kuttl-default/nova-cell0-db-create-bp7mf" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.243726 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a0d129e-9a65-484c-b8a6-ca5a0120d95d-operator-scripts\") pod \"nova-api-db-create-hn7kx\" (UID: 
\"5a0d129e-9a65-484c-b8a6-ca5a0120d95d\") " pod="nova-kuttl-default/nova-api-db-create-hn7kx" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.258671 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gvsp\" (UniqueName: \"kubernetes.io/projected/5a0d129e-9a65-484c-b8a6-ca5a0120d95d-kube-api-access-8gvsp\") pod \"nova-api-db-create-hn7kx\" (UID: \"5a0d129e-9a65-484c-b8a6-ca5a0120d95d\") " pod="nova-kuttl-default/nova-api-db-create-hn7kx" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.344632 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bmj8\" (UniqueName: \"kubernetes.io/projected/5b494b92-3cd1-4b60-853c-a135bb158d8c-kube-api-access-9bmj8\") pod \"nova-cell0-db-create-bp7mf\" (UID: \"5b494b92-3cd1-4b60-853c-a135bb158d8c\") " pod="nova-kuttl-default/nova-cell0-db-create-bp7mf" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.344701 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b494b92-3cd1-4b60-853c-a135bb158d8c-operator-scripts\") pod \"nova-cell0-db-create-bp7mf\" (UID: \"5b494b92-3cd1-4b60-853c-a135bb158d8c\") " pod="nova-kuttl-default/nova-cell0-db-create-bp7mf" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.345427 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b494b92-3cd1-4b60-853c-a135bb158d8c-operator-scripts\") pod \"nova-cell0-db-create-bp7mf\" (UID: \"5b494b92-3cd1-4b60-853c-a135bb158d8c\") " pod="nova-kuttl-default/nova-cell0-db-create-bp7mf" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.351147 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-hn7kx" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.359096 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bmj8\" (UniqueName: \"kubernetes.io/projected/5b494b92-3cd1-4b60-853c-a135bb158d8c-kube-api-access-9bmj8\") pod \"nova-cell0-db-create-bp7mf\" (UID: \"5b494b92-3cd1-4b60-853c-a135bb158d8c\") " pod="nova-kuttl-default/nova-cell0-db-create-bp7mf" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.432605 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-bp7mf" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.676973 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6"] Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.678451 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.680437 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell1-db-secret" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.701064 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz"] Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.702705 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.704614 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell0-db-secret" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.739822 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-pmc6n"] Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.741270 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw"] Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.745636 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.745712 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-pmc6n" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.748684 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-api-db-secret" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.748875 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6"] Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.777095 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz"] Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.794997 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw"] Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.826896 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-pmc6n"] Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.852329 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6tpb\" (UniqueName: \"kubernetes.io/projected/ffe262ed-6f79-4dad-91c6-168b164a6459-kube-api-access-l6tpb\") pod \"nova-cell1-ba32-account-create-update-8xsh6\" (UID: \"ffe262ed-6f79-4dad-91c6-168b164a6459\") " pod="nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.852473 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmlbl\" (UniqueName: \"kubernetes.io/projected/68223c6c-51af-4369-87c2-368ffe71edb7-kube-api-access-qmlbl\") pod \"nova-api-9a1c-account-create-update-lmjgw\" (UID: \"68223c6c-51af-4369-87c2-368ffe71edb7\") " pod="nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.852536 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/500dfca1-a7c0-488c-89ba-2d750245e322-operator-scripts\") pod \"nova-cell0-6ec2-account-create-update-6ntlz\" (UID: \"500dfca1-a7c0-488c-89ba-2d750245e322\") " pod="nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.852592 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a9857104-b2d2-4b42-a96d-2f9f1fadc406-operator-scripts\") pod \"nova-cell1-db-create-pmc6n\" (UID: \"a9857104-b2d2-4b42-a96d-2f9f1fadc406\") " pod="nova-kuttl-default/nova-cell1-db-create-pmc6n" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.852667 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmrfj\" (UniqueName: \"kubernetes.io/projected/a9857104-b2d2-4b42-a96d-2f9f1fadc406-kube-api-access-dmrfj\") pod \"nova-cell1-db-create-pmc6n\" (UID: \"a9857104-b2d2-4b42-a96d-2f9f1fadc406\") " pod="nova-kuttl-default/nova-cell1-db-create-pmc6n" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.852751 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr54f\" (UniqueName: \"kubernetes.io/projected/500dfca1-a7c0-488c-89ba-2d750245e322-kube-api-access-mr54f\") pod \"nova-cell0-6ec2-account-create-update-6ntlz\" (UID: \"500dfca1-a7c0-488c-89ba-2d750245e322\") " pod="nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.852780 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68223c6c-51af-4369-87c2-368ffe71edb7-operator-scripts\") pod \"nova-api-9a1c-account-create-update-lmjgw\" (UID: \"68223c6c-51af-4369-87c2-368ffe71edb7\") " pod="nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.852830 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffe262ed-6f79-4dad-91c6-168b164a6459-operator-scripts\") pod \"nova-cell1-ba32-account-create-update-8xsh6\" (UID: \"ffe262ed-6f79-4dad-91c6-168b164a6459\") " pod="nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.954839 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmrfj\" (UniqueName: \"kubernetes.io/projected/a9857104-b2d2-4b42-a96d-2f9f1fadc406-kube-api-access-dmrfj\") pod \"nova-cell1-db-create-pmc6n\" (UID: \"a9857104-b2d2-4b42-a96d-2f9f1fadc406\") " pod="nova-kuttl-default/nova-cell1-db-create-pmc6n" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.954906 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr54f\" (UniqueName: \"kubernetes.io/projected/500dfca1-a7c0-488c-89ba-2d750245e322-kube-api-access-mr54f\") pod \"nova-cell0-6ec2-account-create-update-6ntlz\" (UID: \"500dfca1-a7c0-488c-89ba-2d750245e322\") " pod="nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.954929 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68223c6c-51af-4369-87c2-368ffe71edb7-operator-scripts\") pod \"nova-api-9a1c-account-create-update-lmjgw\" (UID: \"68223c6c-51af-4369-87c2-368ffe71edb7\") " pod="nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.954954 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffe262ed-6f79-4dad-91c6-168b164a6459-operator-scripts\") pod 
\"nova-cell1-ba32-account-create-update-8xsh6\" (UID: \"ffe262ed-6f79-4dad-91c6-168b164a6459\") " pod="nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.955019 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6tpb\" (UniqueName: \"kubernetes.io/projected/ffe262ed-6f79-4dad-91c6-168b164a6459-kube-api-access-l6tpb\") pod \"nova-cell1-ba32-account-create-update-8xsh6\" (UID: \"ffe262ed-6f79-4dad-91c6-168b164a6459\") " pod="nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.955075 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmlbl\" (UniqueName: \"kubernetes.io/projected/68223c6c-51af-4369-87c2-368ffe71edb7-kube-api-access-qmlbl\") pod \"nova-api-9a1c-account-create-update-lmjgw\" (UID: \"68223c6c-51af-4369-87c2-368ffe71edb7\") " pod="nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.955098 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/500dfca1-a7c0-488c-89ba-2d750245e322-operator-scripts\") pod \"nova-cell0-6ec2-account-create-update-6ntlz\" (UID: \"500dfca1-a7c0-488c-89ba-2d750245e322\") " pod="nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.955143 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9857104-b2d2-4b42-a96d-2f9f1fadc406-operator-scripts\") pod \"nova-cell1-db-create-pmc6n\" (UID: \"a9857104-b2d2-4b42-a96d-2f9f1fadc406\") " pod="nova-kuttl-default/nova-cell1-db-create-pmc6n" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.955900 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68223c6c-51af-4369-87c2-368ffe71edb7-operator-scripts\") pod \"nova-api-9a1c-account-create-update-lmjgw\" (UID: \"68223c6c-51af-4369-87c2-368ffe71edb7\") " pod="nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.955921 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9857104-b2d2-4b42-a96d-2f9f1fadc406-operator-scripts\") pod \"nova-cell1-db-create-pmc6n\" (UID: \"a9857104-b2d2-4b42-a96d-2f9f1fadc406\") " pod="nova-kuttl-default/nova-cell1-db-create-pmc6n" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.956739 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/500dfca1-a7c0-488c-89ba-2d750245e322-operator-scripts\") pod \"nova-cell0-6ec2-account-create-update-6ntlz\" (UID: \"500dfca1-a7c0-488c-89ba-2d750245e322\") " pod="nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.960459 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffe262ed-6f79-4dad-91c6-168b164a6459-operator-scripts\") pod \"nova-cell1-ba32-account-create-update-8xsh6\" (UID: \"ffe262ed-6f79-4dad-91c6-168b164a6459\") " pod="nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6" Jan 23 14:31:17 crc 
kubenswrapper[4775]: I0123 14:31:17.977538 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6tpb\" (UniqueName: \"kubernetes.io/projected/ffe262ed-6f79-4dad-91c6-168b164a6459-kube-api-access-l6tpb\") pod \"nova-cell1-ba32-account-create-update-8xsh6\" (UID: \"ffe262ed-6f79-4dad-91c6-168b164a6459\") " pod="nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.978480 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmrfj\" (UniqueName: \"kubernetes.io/projected/a9857104-b2d2-4b42-a96d-2f9f1fadc406-kube-api-access-dmrfj\") pod \"nova-cell1-db-create-pmc6n\" (UID: \"a9857104-b2d2-4b42-a96d-2f9f1fadc406\") " pod="nova-kuttl-default/nova-cell1-db-create-pmc6n" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.979386 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmlbl\" (UniqueName: \"kubernetes.io/projected/68223c6c-51af-4369-87c2-368ffe71edb7-kube-api-access-qmlbl\") pod \"nova-api-9a1c-account-create-update-lmjgw\" (UID: \"68223c6c-51af-4369-87c2-368ffe71edb7\") " pod="nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.980961 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr54f\" (UniqueName: \"kubernetes.io/projected/500dfca1-a7c0-488c-89ba-2d750245e322-kube-api-access-mr54f\") pod \"nova-cell0-6ec2-account-create-update-6ntlz\" (UID: \"500dfca1-a7c0-488c-89ba-2d750245e322\") " pod="nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz" Jan 23 14:31:18 crc kubenswrapper[4775]: I0123 14:31:18.004052 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6" Jan 23 14:31:18 crc kubenswrapper[4775]: I0123 14:31:18.034637 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz" Jan 23 14:31:18 crc kubenswrapper[4775]: I0123 14:31:18.083389 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw" Jan 23 14:31:18 crc kubenswrapper[4775]: I0123 14:31:18.109757 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-pmc6n" Jan 23 14:31:18 crc kubenswrapper[4775]: I0123 14:31:18.184816 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-hn7kx"] Jan 23 14:31:18 crc kubenswrapper[4775]: I0123 14:31:18.224878 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-bp7mf"] Jan 23 14:31:18 crc kubenswrapper[4775]: W0123 14:31:18.243150 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b494b92_3cd1_4b60_853c_a135bb158d8c.slice/crio-47ffa588ea1a3b82a372dfceb30239be86ea0caffc2aa0a0db10be661801863c WatchSource:0}: Error finding container 47ffa588ea1a3b82a372dfceb30239be86ea0caffc2aa0a0db10be661801863c: Status 404 returned error can't find the container with id 47ffa588ea1a3b82a372dfceb30239be86ea0caffc2aa0a0db10be661801863c Jan 23 14:31:18 crc kubenswrapper[4775]: I0123 14:31:18.503889 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6"] Jan 23 14:31:18 crc kubenswrapper[4775]: W0123 14:31:18.509993 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffe262ed_6f79_4dad_91c6_168b164a6459.slice/crio-751e1cee2e4320249a99712955cae413a4aaa316d0f619d929e5cc3475e0f26c WatchSource:0}: Error finding container 751e1cee2e4320249a99712955cae413a4aaa316d0f619d929e5cc3475e0f26c: Status 404 returned error can't find the container with id 751e1cee2e4320249a99712955cae413a4aaa316d0f619d929e5cc3475e0f26c Jan 23 14:31:18 crc kubenswrapper[4775]: I0123 14:31:18.563470 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz"] Jan 23 14:31:18 crc kubenswrapper[4775]: W0123 14:31:18.566213 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod500dfca1_a7c0_488c_89ba_2d750245e322.slice/crio-da22143ba90138c99ec8cba553198721b3b3be69fb4543d78b581c3053b5210d WatchSource:0}: Error finding container da22143ba90138c99ec8cba553198721b3b3be69fb4543d78b581c3053b5210d: Status 404 returned error can't find the container with id da22143ba90138c99ec8cba553198721b3b3be69fb4543d78b581c3053b5210d Jan 23 14:31:18 crc kubenswrapper[4775]: I0123 14:31:18.639437 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-pmc6n"] Jan 23 14:31:18 crc kubenswrapper[4775]: I0123 14:31:18.645417 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw"] Jan 23 14:31:18 crc kubenswrapper[4775]: W0123 14:31:18.737943 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9857104_b2d2_4b42_a96d_2f9f1fadc406.slice/crio-04d088892f573bba06de570ea835fa2db3e9c0b65b6ba16999412892a8436a05 WatchSource:0}: Error finding container 04d088892f573bba06de570ea835fa2db3e9c0b65b6ba16999412892a8436a05: Status 404 returned error can't find the container with id 04d088892f573bba06de570ea835fa2db3e9c0b65b6ba16999412892a8436a05 Jan 23 14:31:18 crc kubenswrapper[4775]: W0123 14:31:18.739023 4775 manager.go:1169] Failed to process watch event {EventType:0 
Jan 23 14:31:18 crc kubenswrapper[4775]: W0123 14:31:18.739023 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68223c6c_51af_4369_87c2_368ffe71edb7.slice/crio-0dcbb282cf3ac3dfeca7f86ecefbdce648c845857e5b754636faff175d44a121 WatchSource:0}: Error finding container 0dcbb282cf3ac3dfeca7f86ecefbdce648c845857e5b754636faff175d44a121: Status 404 returned error can't find the container with id 0dcbb282cf3ac3dfeca7f86ecefbdce648c845857e5b754636faff175d44a121
Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.142309 4775 generic.go:334] "Generic (PLEG): container finished" podID="a9857104-b2d2-4b42-a96d-2f9f1fadc406" containerID="6d2aa10a47d2fcb45e935313a220958ccb5ce5c86f680afa48a823e4a53178f0" exitCode=0
Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.142422 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-pmc6n" event={"ID":"a9857104-b2d2-4b42-a96d-2f9f1fadc406","Type":"ContainerDied","Data":"6d2aa10a47d2fcb45e935313a220958ccb5ce5c86f680afa48a823e4a53178f0"}
Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.142462 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-pmc6n" event={"ID":"a9857104-b2d2-4b42-a96d-2f9f1fadc406","Type":"ContainerStarted","Data":"04d088892f573bba06de570ea835fa2db3e9c0b65b6ba16999412892a8436a05"}
Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.144874 4775 generic.go:334] "Generic (PLEG): container finished" podID="ffe262ed-6f79-4dad-91c6-168b164a6459" containerID="ecad2940c2ff1569920921fdd03a6c333edaa15c5f0818afcf6db854f924e5ab" exitCode=0
Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.144975 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6" event={"ID":"ffe262ed-6f79-4dad-91c6-168b164a6459","Type":"ContainerDied","Data":"ecad2940c2ff1569920921fdd03a6c333edaa15c5f0818afcf6db854f924e5ab"}
Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.145014 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6" event={"ID":"ffe262ed-6f79-4dad-91c6-168b164a6459","Type":"ContainerStarted","Data":"751e1cee2e4320249a99712955cae413a4aaa316d0f619d929e5cc3475e0f26c"}
Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.147611 4775 generic.go:334] "Generic (PLEG): container finished" podID="5b494b92-3cd1-4b60-853c-a135bb158d8c" containerID="33a99232a0ae7d230c0ca5e3a7fcc4bde1520167a1ceba4a466d07976af3e8d1" exitCode=0
Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.147700 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-bp7mf" event={"ID":"5b494b92-3cd1-4b60-853c-a135bb158d8c","Type":"ContainerDied","Data":"33a99232a0ae7d230c0ca5e3a7fcc4bde1520167a1ceba4a466d07976af3e8d1"}
Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.147728 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-bp7mf" event={"ID":"5b494b92-3cd1-4b60-853c-a135bb158d8c","Type":"ContainerStarted","Data":"47ffa588ea1a3b82a372dfceb30239be86ea0caffc2aa0a0db10be661801863c"}
Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.149725 4775 generic.go:334] "Generic (PLEG): container finished" podID="500dfca1-a7c0-488c-89ba-2d750245e322" containerID="46c83cc2befa55d2730e0306d1a537315368a038fa5d8e25f6f9a9178ae4909d" exitCode=0
Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.149853 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz" event={"ID":"500dfca1-a7c0-488c-89ba-2d750245e322","Type":"ContainerDied","Data":"46c83cc2befa55d2730e0306d1a537315368a038fa5d8e25f6f9a9178ae4909d"}
Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.149893 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz" event={"ID":"500dfca1-a7c0-488c-89ba-2d750245e322","Type":"ContainerStarted","Data":"da22143ba90138c99ec8cba553198721b3b3be69fb4543d78b581c3053b5210d"}
Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.151629 4775 generic.go:334] "Generic (PLEG): container finished" podID="5a0d129e-9a65-484c-b8a6-ca5a0120d95d" containerID="36da3a3e665fb3823516d8d90857086698e0e37c43b293f38337204d81ca04a2" exitCode=0
Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.151703 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-hn7kx" event={"ID":"5a0d129e-9a65-484c-b8a6-ca5a0120d95d","Type":"ContainerDied","Data":"36da3a3e665fb3823516d8d90857086698e0e37c43b293f38337204d81ca04a2"}
Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.151732 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-hn7kx" event={"ID":"5a0d129e-9a65-484c-b8a6-ca5a0120d95d","Type":"ContainerStarted","Data":"fe3c0428929ea5490420508d57fc508fae3beca204a6fe2065d2af142f3c5a26"}
Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.153935 4775 generic.go:334] "Generic (PLEG): container finished" podID="68223c6c-51af-4369-87c2-368ffe71edb7" containerID="552a75aff373d33848d323f4e1a099464b0ab75b386e7916291405fa3aa8b333" exitCode=0
Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.153986 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw" event={"ID":"68223c6c-51af-4369-87c2-368ffe71edb7","Type":"ContainerDied","Data":"552a75aff373d33848d323f4e1a099464b0ab75b386e7916291405fa3aa8b333"}
Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.154017 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw" event={"ID":"68223c6c-51af-4369-87c2-368ffe71edb7","Type":"ContainerStarted","Data":"0dcbb282cf3ac3dfeca7f86ecefbdce648c845857e5b754636faff175d44a121"}
Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.714691 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342"
Jan 23 14:31:19 crc kubenswrapper[4775]: E0123 14:31:19.715203 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271"
Need to start a new one" pod="nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz" Jan 23 14:31:20 crc kubenswrapper[4775]: I0123 14:31:20.799126 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr54f\" (UniqueName: \"kubernetes.io/projected/500dfca1-a7c0-488c-89ba-2d750245e322-kube-api-access-mr54f\") pod \"500dfca1-a7c0-488c-89ba-2d750245e322\" (UID: \"500dfca1-a7c0-488c-89ba-2d750245e322\") " Jan 23 14:31:20 crc kubenswrapper[4775]: I0123 14:31:20.799470 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/500dfca1-a7c0-488c-89ba-2d750245e322-operator-scripts\") pod \"500dfca1-a7c0-488c-89ba-2d750245e322\" (UID: \"500dfca1-a7c0-488c-89ba-2d750245e322\") " Jan 23 14:31:20 crc kubenswrapper[4775]: I0123 14:31:20.800115 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/500dfca1-a7c0-488c-89ba-2d750245e322-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "500dfca1-a7c0-488c-89ba-2d750245e322" (UID: "500dfca1-a7c0-488c-89ba-2d750245e322"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:31:20 crc kubenswrapper[4775]: I0123 14:31:20.804590 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/500dfca1-a7c0-488c-89ba-2d750245e322-kube-api-access-mr54f" (OuterVolumeSpecName: "kube-api-access-mr54f") pod "500dfca1-a7c0-488c-89ba-2d750245e322" (UID: "500dfca1-a7c0-488c-89ba-2d750245e322"). InnerVolumeSpecName "kube-api-access-mr54f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:20 crc kubenswrapper[4775]: I0123 14:31:20.858597 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw" Jan 23 14:31:20 crc kubenswrapper[4775]: I0123 14:31:20.862726 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-hn7kx" Jan 23 14:31:20 crc kubenswrapper[4775]: I0123 14:31:20.867867 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6" Jan 23 14:31:20 crc kubenswrapper[4775]: I0123 14:31:20.880238 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-bp7mf" Jan 23 14:31:20 crc kubenswrapper[4775]: I0123 14:31:20.884276 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-pmc6n" Jan 23 14:31:20 crc kubenswrapper[4775]: I0123 14:31:20.901847 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr54f\" (UniqueName: \"kubernetes.io/projected/500dfca1-a7c0-488c-89ba-2d750245e322-kube-api-access-mr54f\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:20 crc kubenswrapper[4775]: I0123 14:31:20.901877 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/500dfca1-a7c0-488c-89ba-2d750245e322-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.003222 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b494b92-3cd1-4b60-853c-a135bb158d8c-operator-scripts\") pod \"5b494b92-3cd1-4b60-853c-a135bb158d8c\" (UID: \"5b494b92-3cd1-4b60-853c-a135bb158d8c\") " Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.004046 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b494b92-3cd1-4b60-853c-a135bb158d8c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5b494b92-3cd1-4b60-853c-a135bb158d8c" (UID: "5b494b92-3cd1-4b60-853c-a135bb158d8c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.004313 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9857104-b2d2-4b42-a96d-2f9f1fadc406-operator-scripts\") pod \"a9857104-b2d2-4b42-a96d-2f9f1fadc406\" (UID: \"a9857104-b2d2-4b42-a96d-2f9f1fadc406\") " Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.005022 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bmj8\" (UniqueName: \"kubernetes.io/projected/5b494b92-3cd1-4b60-853c-a135bb158d8c-kube-api-access-9bmj8\") pod \"5b494b92-3cd1-4b60-853c-a135bb158d8c\" (UID: \"5b494b92-3cd1-4b60-853c-a135bb158d8c\") " Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.005054 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9857104-b2d2-4b42-a96d-2f9f1fadc406-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9857104-b2d2-4b42-a96d-2f9f1fadc406" (UID: "a9857104-b2d2-4b42-a96d-2f9f1fadc406"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.005123 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6tpb\" (UniqueName: \"kubernetes.io/projected/ffe262ed-6f79-4dad-91c6-168b164a6459-kube-api-access-l6tpb\") pod \"ffe262ed-6f79-4dad-91c6-168b164a6459\" (UID: \"ffe262ed-6f79-4dad-91c6-168b164a6459\") " Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.005188 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmrfj\" (UniqueName: \"kubernetes.io/projected/a9857104-b2d2-4b42-a96d-2f9f1fadc406-kube-api-access-dmrfj\") pod \"a9857104-b2d2-4b42-a96d-2f9f1fadc406\" (UID: \"a9857104-b2d2-4b42-a96d-2f9f1fadc406\") " Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.005253 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68223c6c-51af-4369-87c2-368ffe71edb7-operator-scripts\") pod \"68223c6c-51af-4369-87c2-368ffe71edb7\" (UID: \"68223c6c-51af-4369-87c2-368ffe71edb7\") " Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.005282 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gvsp\" (UniqueName: \"kubernetes.io/projected/5a0d129e-9a65-484c-b8a6-ca5a0120d95d-kube-api-access-8gvsp\") pod \"5a0d129e-9a65-484c-b8a6-ca5a0120d95d\" (UID: \"5a0d129e-9a65-484c-b8a6-ca5a0120d95d\") " Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.005317 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffe262ed-6f79-4dad-91c6-168b164a6459-operator-scripts\") pod \"ffe262ed-6f79-4dad-91c6-168b164a6459\" (UID: \"ffe262ed-6f79-4dad-91c6-168b164a6459\") " Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.005365 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmlbl\" (UniqueName: \"kubernetes.io/projected/68223c6c-51af-4369-87c2-368ffe71edb7-kube-api-access-qmlbl\") pod \"68223c6c-51af-4369-87c2-368ffe71edb7\" (UID: \"68223c6c-51af-4369-87c2-368ffe71edb7\") " Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.005408 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a0d129e-9a65-484c-b8a6-ca5a0120d95d-operator-scripts\") pod \"5a0d129e-9a65-484c-b8a6-ca5a0120d95d\" (UID: \"5a0d129e-9a65-484c-b8a6-ca5a0120d95d\") " Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.005892 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffe262ed-6f79-4dad-91c6-168b164a6459-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ffe262ed-6f79-4dad-91c6-168b164a6459" (UID: "ffe262ed-6f79-4dad-91c6-168b164a6459"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.005958 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b494b92-3cd1-4b60-853c-a135bb158d8c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.005972 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9857104-b2d2-4b42-a96d-2f9f1fadc406-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.006112 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a0d129e-9a65-484c-b8a6-ca5a0120d95d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5a0d129e-9a65-484c-b8a6-ca5a0120d95d" (UID: "5a0d129e-9a65-484c-b8a6-ca5a0120d95d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.006357 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68223c6c-51af-4369-87c2-368ffe71edb7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "68223c6c-51af-4369-87c2-368ffe71edb7" (UID: "68223c6c-51af-4369-87c2-368ffe71edb7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.007620 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b494b92-3cd1-4b60-853c-a135bb158d8c-kube-api-access-9bmj8" (OuterVolumeSpecName: "kube-api-access-9bmj8") pod "5b494b92-3cd1-4b60-853c-a135bb158d8c" (UID: "5b494b92-3cd1-4b60-853c-a135bb158d8c"). InnerVolumeSpecName "kube-api-access-9bmj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.008100 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a0d129e-9a65-484c-b8a6-ca5a0120d95d-kube-api-access-8gvsp" (OuterVolumeSpecName: "kube-api-access-8gvsp") pod "5a0d129e-9a65-484c-b8a6-ca5a0120d95d" (UID: "5a0d129e-9a65-484c-b8a6-ca5a0120d95d"). InnerVolumeSpecName "kube-api-access-8gvsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.010240 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffe262ed-6f79-4dad-91c6-168b164a6459-kube-api-access-l6tpb" (OuterVolumeSpecName: "kube-api-access-l6tpb") pod "ffe262ed-6f79-4dad-91c6-168b164a6459" (UID: "ffe262ed-6f79-4dad-91c6-168b164a6459"). InnerVolumeSpecName "kube-api-access-l6tpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.010742 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68223c6c-51af-4369-87c2-368ffe71edb7-kube-api-access-qmlbl" (OuterVolumeSpecName: "kube-api-access-qmlbl") pod "68223c6c-51af-4369-87c2-368ffe71edb7" (UID: "68223c6c-51af-4369-87c2-368ffe71edb7"). InnerVolumeSpecName "kube-api-access-qmlbl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.012909 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9857104-b2d2-4b42-a96d-2f9f1fadc406-kube-api-access-dmrfj" (OuterVolumeSpecName: "kube-api-access-dmrfj") pod "a9857104-b2d2-4b42-a96d-2f9f1fadc406" (UID: "a9857104-b2d2-4b42-a96d-2f9f1fadc406"). InnerVolumeSpecName "kube-api-access-dmrfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.107211 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a0d129e-9a65-484c-b8a6-ca5a0120d95d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.107246 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bmj8\" (UniqueName: \"kubernetes.io/projected/5b494b92-3cd1-4b60-853c-a135bb158d8c-kube-api-access-9bmj8\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.107272 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6tpb\" (UniqueName: \"kubernetes.io/projected/ffe262ed-6f79-4dad-91c6-168b164a6459-kube-api-access-l6tpb\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.107284 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmrfj\" (UniqueName: \"kubernetes.io/projected/a9857104-b2d2-4b42-a96d-2f9f1fadc406-kube-api-access-dmrfj\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.107297 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68223c6c-51af-4369-87c2-368ffe71edb7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.107308 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gvsp\" (UniqueName: \"kubernetes.io/projected/5a0d129e-9a65-484c-b8a6-ca5a0120d95d-kube-api-access-8gvsp\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.107319 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffe262ed-6f79-4dad-91c6-168b164a6459-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.107329 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmlbl\" (UniqueName: \"kubernetes.io/projected/68223c6c-51af-4369-87c2-368ffe71edb7-kube-api-access-qmlbl\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.181109 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-bp7mf" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.181139 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-bp7mf" event={"ID":"5b494b92-3cd1-4b60-853c-a135bb158d8c","Type":"ContainerDied","Data":"47ffa588ea1a3b82a372dfceb30239be86ea0caffc2aa0a0db10be661801863c"} Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.181186 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47ffa588ea1a3b82a372dfceb30239be86ea0caffc2aa0a0db10be661801863c" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.183905 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.183896 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz" event={"ID":"500dfca1-a7c0-488c-89ba-2d750245e322","Type":"ContainerDied","Data":"da22143ba90138c99ec8cba553198721b3b3be69fb4543d78b581c3053b5210d"} Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.184096 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da22143ba90138c99ec8cba553198721b3b3be69fb4543d78b581c3053b5210d" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.186106 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-hn7kx" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.186111 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-hn7kx" event={"ID":"5a0d129e-9a65-484c-b8a6-ca5a0120d95d","Type":"ContainerDied","Data":"fe3c0428929ea5490420508d57fc508fae3beca204a6fe2065d2af142f3c5a26"} Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.186260 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe3c0428929ea5490420508d57fc508fae3beca204a6fe2065d2af142f3c5a26" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.188093 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw" event={"ID":"68223c6c-51af-4369-87c2-368ffe71edb7","Type":"ContainerDied","Data":"0dcbb282cf3ac3dfeca7f86ecefbdce648c845857e5b754636faff175d44a121"} Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.188142 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dcbb282cf3ac3dfeca7f86ecefbdce648c845857e5b754636faff175d44a121" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.188156 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.190120 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-pmc6n" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.190148 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-pmc6n" event={"ID":"a9857104-b2d2-4b42-a96d-2f9f1fadc406","Type":"ContainerDied","Data":"04d088892f573bba06de570ea835fa2db3e9c0b65b6ba16999412892a8436a05"} Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.190258 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04d088892f573bba06de570ea835fa2db3e9c0b65b6ba16999412892a8436a05" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.192312 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6" event={"ID":"ffe262ed-6f79-4dad-91c6-168b164a6459","Type":"ContainerDied","Data":"751e1cee2e4320249a99712955cae413a4aaa316d0f619d929e5cc3475e0f26c"} Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.192351 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="751e1cee2e4320249a99712955cae413a4aaa316d0f619d929e5cc3475e0f26c" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.192384 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.668610 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l"] Jan 23 14:31:22 crc kubenswrapper[4775]: E0123 14:31:22.669086 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a0d129e-9a65-484c-b8a6-ca5a0120d95d" containerName="mariadb-database-create" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.669107 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a0d129e-9a65-484c-b8a6-ca5a0120d95d" containerName="mariadb-database-create" Jan 23 14:31:22 crc kubenswrapper[4775]: E0123 14:31:22.669128 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9857104-b2d2-4b42-a96d-2f9f1fadc406" containerName="mariadb-database-create" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.669140 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9857104-b2d2-4b42-a96d-2f9f1fadc406" containerName="mariadb-database-create" Jan 23 14:31:22 crc kubenswrapper[4775]: E0123 14:31:22.669175 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b494b92-3cd1-4b60-853c-a135bb158d8c" containerName="mariadb-database-create" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.669187 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b494b92-3cd1-4b60-853c-a135bb158d8c" containerName="mariadb-database-create" Jan 23 14:31:22 crc kubenswrapper[4775]: E0123 14:31:22.669206 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68223c6c-51af-4369-87c2-368ffe71edb7" containerName="mariadb-account-create-update" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.669218 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="68223c6c-51af-4369-87c2-368ffe71edb7" containerName="mariadb-account-create-update" Jan 23 14:31:22 crc kubenswrapper[4775]: E0123 14:31:22.669238 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffe262ed-6f79-4dad-91c6-168b164a6459" containerName="mariadb-account-create-update" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.669250 4775 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="ffe262ed-6f79-4dad-91c6-168b164a6459" containerName="mariadb-account-create-update" Jan 23 14:31:22 crc kubenswrapper[4775]: E0123 14:31:22.669277 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="500dfca1-a7c0-488c-89ba-2d750245e322" containerName="mariadb-account-create-update" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.669291 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="500dfca1-a7c0-488c-89ba-2d750245e322" containerName="mariadb-account-create-update" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.669525 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a0d129e-9a65-484c-b8a6-ca5a0120d95d" containerName="mariadb-database-create" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.669551 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="68223c6c-51af-4369-87c2-368ffe71edb7" containerName="mariadb-account-create-update" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.669569 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="500dfca1-a7c0-488c-89ba-2d750245e322" containerName="mariadb-account-create-update" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.669590 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b494b92-3cd1-4b60-853c-a135bb158d8c" containerName="mariadb-database-create" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.669610 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9857104-b2d2-4b42-a96d-2f9f1fadc406" containerName="mariadb-database-create" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.669631 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffe262ed-6f79-4dad-91c6-168b164a6459" containerName="mariadb-account-create-update" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.670418 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.672606 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-scripts" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.677706 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.678340 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-v6hs6" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.694473 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l"] Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.838019 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b47b9373-0dd5-4635-a8f9-06aa0fc60174-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-lcg7l\" (UID: \"b47b9373-0dd5-4635-a8f9-06aa0fc60174\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.838092 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b47b9373-0dd5-4635-a8f9-06aa0fc60174-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-lcg7l\" (UID: \"b47b9373-0dd5-4635-a8f9-06aa0fc60174\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.838505 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ctxr\" (UniqueName: \"kubernetes.io/projected/b47b9373-0dd5-4635-a8f9-06aa0fc60174-kube-api-access-5ctxr\") pod \"nova-kuttl-cell0-conductor-db-sync-lcg7l\" (UID: \"b47b9373-0dd5-4635-a8f9-06aa0fc60174\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.941037 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b47b9373-0dd5-4635-a8f9-06aa0fc60174-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-lcg7l\" (UID: \"b47b9373-0dd5-4635-a8f9-06aa0fc60174\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.941143 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b47b9373-0dd5-4635-a8f9-06aa0fc60174-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-lcg7l\" (UID: \"b47b9373-0dd5-4635-a8f9-06aa0fc60174\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.941374 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ctxr\" (UniqueName: \"kubernetes.io/projected/b47b9373-0dd5-4635-a8f9-06aa0fc60174-kube-api-access-5ctxr\") pod \"nova-kuttl-cell0-conductor-db-sync-lcg7l\" (UID: \"b47b9373-0dd5-4635-a8f9-06aa0fc60174\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.946223 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/b47b9373-0dd5-4635-a8f9-06aa0fc60174-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-lcg7l\" (UID: \"b47b9373-0dd5-4635-a8f9-06aa0fc60174\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.946852 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b47b9373-0dd5-4635-a8f9-06aa0fc60174-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-lcg7l\" (UID: \"b47b9373-0dd5-4635-a8f9-06aa0fc60174\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.971194 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ctxr\" (UniqueName: \"kubernetes.io/projected/b47b9373-0dd5-4635-a8f9-06aa0fc60174-kube-api-access-5ctxr\") pod \"nova-kuttl-cell0-conductor-db-sync-lcg7l\" (UID: \"b47b9373-0dd5-4635-a8f9-06aa0fc60174\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" Jan 23 14:31:23 crc kubenswrapper[4775]: I0123 14:31:23.030419 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" Jan 23 14:31:23 crc kubenswrapper[4775]: I0123 14:31:23.534344 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l"] Jan 23 14:31:23 crc kubenswrapper[4775]: W0123 14:31:23.550109 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb47b9373_0dd5_4635_a8f9_06aa0fc60174.slice/crio-791c1fb139c2bde38dc6ec8b899268a93625a220432e70aaf0f5560c0277102a WatchSource:0}: Error finding container 791c1fb139c2bde38dc6ec8b899268a93625a220432e70aaf0f5560c0277102a: Status 404 returned error can't find the container with id 791c1fb139c2bde38dc6ec8b899268a93625a220432e70aaf0f5560c0277102a Jan 23 14:31:24 crc kubenswrapper[4775]: I0123 14:31:24.233907 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" event={"ID":"b47b9373-0dd5-4635-a8f9-06aa0fc60174","Type":"ContainerStarted","Data":"2079dfd1f90a546b48b0adf5addfe5584632a67d75d8c2a2dfabd83d3cfc9c6f"} Jan 23 14:31:24 crc kubenswrapper[4775]: I0123 14:31:24.234444 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" event={"ID":"b47b9373-0dd5-4635-a8f9-06aa0fc60174","Type":"ContainerStarted","Data":"791c1fb139c2bde38dc6ec8b899268a93625a220432e70aaf0f5560c0277102a"} Jan 23 14:31:24 crc kubenswrapper[4775]: I0123 14:31:24.258618 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" podStartSLOduration=2.258566709 podStartE2EDuration="2.258566709s" podCreationTimestamp="2026-01-23 14:31:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:31:24.256873314 +0000 UTC m=+1631.251702134" watchObservedRunningTime="2026-01-23 14:31:24.258566709 +0000 UTC m=+1631.253395469" Jan 23 14:31:28 crc kubenswrapper[4775]: I0123 14:31:28.292244 4775 generic.go:334] "Generic (PLEG): container finished" podID="b47b9373-0dd5-4635-a8f9-06aa0fc60174" containerID="2079dfd1f90a546b48b0adf5addfe5584632a67d75d8c2a2dfabd83d3cfc9c6f" exitCode=0 Jan 23 14:31:28 crc 
Jan 23 14:31:28 crc kubenswrapper[4775]: I0123 14:31:28.292713 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" event={"ID":"b47b9373-0dd5-4635-a8f9-06aa0fc60174","Type":"ContainerDied","Data":"2079dfd1f90a546b48b0adf5addfe5584632a67d75d8c2a2dfabd83d3cfc9c6f"}
Jan 23 14:31:29 crc kubenswrapper[4775]: I0123 14:31:29.729599 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l"
Jan 23 14:31:29 crc kubenswrapper[4775]: I0123 14:31:29.875354 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b47b9373-0dd5-4635-a8f9-06aa0fc60174-config-data\") pod \"b47b9373-0dd5-4635-a8f9-06aa0fc60174\" (UID: \"b47b9373-0dd5-4635-a8f9-06aa0fc60174\") "
Jan 23 14:31:29 crc kubenswrapper[4775]: I0123 14:31:29.875444 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ctxr\" (UniqueName: \"kubernetes.io/projected/b47b9373-0dd5-4635-a8f9-06aa0fc60174-kube-api-access-5ctxr\") pod \"b47b9373-0dd5-4635-a8f9-06aa0fc60174\" (UID: \"b47b9373-0dd5-4635-a8f9-06aa0fc60174\") "
Jan 23 14:31:29 crc kubenswrapper[4775]: I0123 14:31:29.875475 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b47b9373-0dd5-4635-a8f9-06aa0fc60174-scripts\") pod \"b47b9373-0dd5-4635-a8f9-06aa0fc60174\" (UID: \"b47b9373-0dd5-4635-a8f9-06aa0fc60174\") "
Jan 23 14:31:29 crc kubenswrapper[4775]: I0123 14:31:29.880698 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b47b9373-0dd5-4635-a8f9-06aa0fc60174-kube-api-access-5ctxr" (OuterVolumeSpecName: "kube-api-access-5ctxr") pod "b47b9373-0dd5-4635-a8f9-06aa0fc60174" (UID: "b47b9373-0dd5-4635-a8f9-06aa0fc60174"). InnerVolumeSpecName "kube-api-access-5ctxr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:31:29 crc kubenswrapper[4775]: I0123 14:31:29.883106 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b47b9373-0dd5-4635-a8f9-06aa0fc60174-scripts" (OuterVolumeSpecName: "scripts") pod "b47b9373-0dd5-4635-a8f9-06aa0fc60174" (UID: "b47b9373-0dd5-4635-a8f9-06aa0fc60174"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:31:29 crc kubenswrapper[4775]: I0123 14:31:29.919798 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b47b9373-0dd5-4635-a8f9-06aa0fc60174-config-data" (OuterVolumeSpecName: "config-data") pod "b47b9373-0dd5-4635-a8f9-06aa0fc60174" (UID: "b47b9373-0dd5-4635-a8f9-06aa0fc60174"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:31:29 crc kubenswrapper[4775]: I0123 14:31:29.977915 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b47b9373-0dd5-4635-a8f9-06aa0fc60174-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:31:29 crc kubenswrapper[4775]: I0123 14:31:29.977968 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ctxr\" (UniqueName: \"kubernetes.io/projected/b47b9373-0dd5-4635-a8f9-06aa0fc60174-kube-api-access-5ctxr\") on node \"crc\" DevicePath \"\""
Jan 23 14:31:29 crc kubenswrapper[4775]: I0123 14:31:29.977992 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b47b9373-0dd5-4635-a8f9-06aa0fc60174-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 14:31:30 crc kubenswrapper[4775]: I0123 14:31:30.319287 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" event={"ID":"b47b9373-0dd5-4635-a8f9-06aa0fc60174","Type":"ContainerDied","Data":"791c1fb139c2bde38dc6ec8b899268a93625a220432e70aaf0f5560c0277102a"}
Jan 23 14:31:30 crc kubenswrapper[4775]: I0123 14:31:30.319351 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="791c1fb139c2bde38dc6ec8b899268a93625a220432e70aaf0f5560c0277102a"
Jan 23 14:31:30 crc kubenswrapper[4775]: I0123 14:31:30.319509 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l"
Jan 23 14:31:30 crc kubenswrapper[4775]: I0123 14:31:30.421677 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"]
Jan 23 14:31:30 crc kubenswrapper[4775]: E0123 14:31:30.422185 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b47b9373-0dd5-4635-a8f9-06aa0fc60174" containerName="nova-kuttl-cell0-conductor-db-sync"
Jan 23 14:31:30 crc kubenswrapper[4775]: I0123 14:31:30.422215 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="b47b9373-0dd5-4635-a8f9-06aa0fc60174" containerName="nova-kuttl-cell0-conductor-db-sync"
Jan 23 14:31:30 crc kubenswrapper[4775]: I0123 14:31:30.422485 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="b47b9373-0dd5-4635-a8f9-06aa0fc60174" containerName="nova-kuttl-cell0-conductor-db-sync"
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:31:30 crc kubenswrapper[4775]: I0123 14:31:30.430428 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 23 14:31:30 crc kubenswrapper[4775]: I0123 14:31:30.431428 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-v6hs6" Jan 23 14:31:30 crc kubenswrapper[4775]: I0123 14:31:30.440017 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:31:30 crc kubenswrapper[4775]: I0123 14:31:30.491117 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m26pl\" (UniqueName: \"kubernetes.io/projected/84473a0d-a6e7-41ab-8b88-07b8ed888950-kube-api-access-m26pl\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"84473a0d-a6e7-41ab-8b88-07b8ed888950\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:31:30 crc kubenswrapper[4775]: I0123 14:31:30.491256 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84473a0d-a6e7-41ab-8b88-07b8ed888950-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"84473a0d-a6e7-41ab-8b88-07b8ed888950\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:31:30 crc kubenswrapper[4775]: I0123 14:31:30.592738 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m26pl\" (UniqueName: \"kubernetes.io/projected/84473a0d-a6e7-41ab-8b88-07b8ed888950-kube-api-access-m26pl\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"84473a0d-a6e7-41ab-8b88-07b8ed888950\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:31:30 crc kubenswrapper[4775]: I0123 14:31:30.592889 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84473a0d-a6e7-41ab-8b88-07b8ed888950-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"84473a0d-a6e7-41ab-8b88-07b8ed888950\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:31:30 crc kubenswrapper[4775]: I0123 14:31:30.600214 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84473a0d-a6e7-41ab-8b88-07b8ed888950-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"84473a0d-a6e7-41ab-8b88-07b8ed888950\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:31:30 crc kubenswrapper[4775]: I0123 14:31:30.629687 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m26pl\" (UniqueName: \"kubernetes.io/projected/84473a0d-a6e7-41ab-8b88-07b8ed888950-kube-api-access-m26pl\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"84473a0d-a6e7-41ab-8b88-07b8ed888950\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:31:30 crc kubenswrapper[4775]: I0123 14:31:30.744700 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:31:31 crc kubenswrapper[4775]: I0123 14:31:31.239934 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:31:31 crc kubenswrapper[4775]: I0123 14:31:31.328919 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"84473a0d-a6e7-41ab-8b88-07b8ed888950","Type":"ContainerStarted","Data":"b44ad7319eff2652d4ad8fadab672eed48adfae26f3c8e4cc8c6eb5f3b5d2bc0"} Jan 23 14:31:32 crc kubenswrapper[4775]: I0123 14:31:32.347783 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"84473a0d-a6e7-41ab-8b88-07b8ed888950","Type":"ContainerStarted","Data":"dc374e41b812f145b9a3d5437aa30440decff971ec9b42763a18a56b3992b678"} Jan 23 14:31:32 crc kubenswrapper[4775]: I0123 14:31:32.349474 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:31:32 crc kubenswrapper[4775]: I0123 14:31:32.383616 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podStartSLOduration=2.383597307 podStartE2EDuration="2.383597307s" podCreationTimestamp="2026-01-23 14:31:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:31:32.378331526 +0000 UTC m=+1639.373160266" watchObservedRunningTime="2026-01-23 14:31:32.383597307 +0000 UTC m=+1639.378426047" Jan 23 14:31:34 crc kubenswrapper[4775]: I0123 14:31:34.714320 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:31:34 crc kubenswrapper[4775]: E0123 14:31:34.715157 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:31:40 crc kubenswrapper[4775]: I0123 14:31:40.788628 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.293046 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf"] Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.294887 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.298161 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-config-data" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.298589 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-scripts" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.310608 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf"] Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.405852 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f751d2a1-4497-4fb2-9c13-af54db584a48-scripts\") pod \"nova-kuttl-cell0-cell-mapping-lnndf\" (UID: \"f751d2a1-4497-4fb2-9c13-af54db584a48\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.406105 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2q2j\" (UniqueName: \"kubernetes.io/projected/f751d2a1-4497-4fb2-9c13-af54db584a48-kube-api-access-z2q2j\") pod \"nova-kuttl-cell0-cell-mapping-lnndf\" (UID: \"f751d2a1-4497-4fb2-9c13-af54db584a48\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.406206 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f751d2a1-4497-4fb2-9c13-af54db584a48-config-data\") pod \"nova-kuttl-cell0-cell-mapping-lnndf\" (UID: \"f751d2a1-4497-4fb2-9c13-af54db584a48\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.507884 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f751d2a1-4497-4fb2-9c13-af54db584a48-scripts\") pod \"nova-kuttl-cell0-cell-mapping-lnndf\" (UID: \"f751d2a1-4497-4fb2-9c13-af54db584a48\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.507967 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2q2j\" (UniqueName: \"kubernetes.io/projected/f751d2a1-4497-4fb2-9c13-af54db584a48-kube-api-access-z2q2j\") pod \"nova-kuttl-cell0-cell-mapping-lnndf\" (UID: \"f751d2a1-4497-4fb2-9c13-af54db584a48\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.508026 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f751d2a1-4497-4fb2-9c13-af54db584a48-config-data\") pod \"nova-kuttl-cell0-cell-mapping-lnndf\" (UID: \"f751d2a1-4497-4fb2-9c13-af54db584a48\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.515079 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f751d2a1-4497-4fb2-9c13-af54db584a48-scripts\") pod \"nova-kuttl-cell0-cell-mapping-lnndf\" (UID: \"f751d2a1-4497-4fb2-9c13-af54db584a48\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf" Jan 23 14:31:41 crc 
kubenswrapper[4775]: I0123 14:31:41.516211 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f751d2a1-4497-4fb2-9c13-af54db584a48-config-data\") pod \"nova-kuttl-cell0-cell-mapping-lnndf\" (UID: \"f751d2a1-4497-4fb2-9c13-af54db584a48\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.546272 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.547668 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.551051 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.551790 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2q2j\" (UniqueName: \"kubernetes.io/projected/f751d2a1-4497-4fb2-9c13-af54db584a48-kube-api-access-z2q2j\") pod \"nova-kuttl-cell0-cell-mapping-lnndf\" (UID: \"f751d2a1-4497-4fb2-9c13-af54db584a48\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.563588 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.601029 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.602438 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.605407 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.614339 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.634986 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.650255 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.651524 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.656441 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.672871 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.689336 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.696417 4775 util.go:30] "No sandbox for pod can be found. 
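From here the scheduler, metadata, api, and novncproxy pods are all admitted within the same second, so their volume lines interleave. Each volume still follows the same three-step progression per UniqueName: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded. A hypothetical validator for that ordering, keyed on the volume UniqueName:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

// Matches the escaped UniqueName field in the log lines above.
var uniq = regexp.MustCompile(`UniqueName: \\"([^\\]+)\\"`)

func main() {
	phases := []string{
		"operationExecutor.VerifyControllerAttachedVolume started",
		"operationExecutor.MountVolume started",
		"MountVolume.SetUp succeeded",
	}
	state := map[string]int{} // UniqueName -> highest phase reached (1..3)
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
	for sc.Scan() {
		line := sc.Text()
		m := uniq.FindStringSubmatch(line)
		if m == nil {
			continue
		}
		for i, p := range phases {
			// Only advance when the phases arrive in order.
			if strings.Contains(line, p) && state[m[1]] == i {
				state[m[1]] = i + 1
			}
		}
	}
	for v, n := range state {
		fmt.Printf("%s reached phase %d/3\n", v, n)
	}
}
```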
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.699932 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-novncproxy-config-data" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.710310 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j8cr\" (UniqueName: \"kubernetes.io/projected/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-kube-api-access-5j8cr\") pod \"nova-kuttl-metadata-0\" (UID: \"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.710411 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.710452 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d46934b-df3e-4beb-b74c-0c4c0d568ec4-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"3d46934b-df3e-4beb-b74c-0c4c0d568ec4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.710483 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zdws\" (UniqueName: \"kubernetes.io/projected/3d46934b-df3e-4beb-b74c-0c4c0d568ec4-kube-api-access-7zdws\") pod \"nova-kuttl-scheduler-0\" (UID: \"3d46934b-df3e-4beb-b74c-0c4c0d568ec4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.710504 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.731320 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.813620 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a2ad7dd-d80c-4eb4-8531-c2a8208bb760-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"5a2ad7dd-d80c-4eb4-8531-c2a8208bb760\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.813704 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zdws\" (UniqueName: \"kubernetes.io/projected/3d46934b-df3e-4beb-b74c-0c4c0d568ec4-kube-api-access-7zdws\") pod \"nova-kuttl-scheduler-0\" (UID: \"3d46934b-df3e-4beb-b74c-0c4c0d568ec4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.813757 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-config-data\") pod \"nova-kuttl-metadata-0\" (UID: 
\"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.813788 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j8cr\" (UniqueName: \"kubernetes.io/projected/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-kube-api-access-5j8cr\") pod \"nova-kuttl-metadata-0\" (UID: \"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.813899 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kpdm\" (UniqueName: \"kubernetes.io/projected/5a2ad7dd-d80c-4eb4-8531-c2a8208bb760-kube-api-access-7kpdm\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"5a2ad7dd-d80c-4eb4-8531-c2a8208bb760\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.813949 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdnd6\" (UniqueName: \"kubernetes.io/projected/08da1273-e72a-44f8-82d2-adf17cee8644-kube-api-access-tdnd6\") pod \"nova-kuttl-api-0\" (UID: \"08da1273-e72a-44f8-82d2-adf17cee8644\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.813965 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08da1273-e72a-44f8-82d2-adf17cee8644-config-data\") pod \"nova-kuttl-api-0\" (UID: \"08da1273-e72a-44f8-82d2-adf17cee8644\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.813980 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08da1273-e72a-44f8-82d2-adf17cee8644-logs\") pod \"nova-kuttl-api-0\" (UID: \"08da1273-e72a-44f8-82d2-adf17cee8644\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.814043 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.814081 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d46934b-df3e-4beb-b74c-0c4c0d568ec4-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"3d46934b-df3e-4beb-b74c-0c4c0d568ec4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.814898 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.826372 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d46934b-df3e-4beb-b74c-0c4c0d568ec4-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"3d46934b-df3e-4beb-b74c-0c4c0d568ec4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" 
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.826531 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.830087 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j8cr\" (UniqueName: \"kubernetes.io/projected/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-kube-api-access-5j8cr\") pod \"nova-kuttl-metadata-0\" (UID: \"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.830167 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zdws\" (UniqueName: \"kubernetes.io/projected/3d46934b-df3e-4beb-b74c-0c4c0d568ec4-kube-api-access-7zdws\") pod \"nova-kuttl-scheduler-0\" (UID: \"3d46934b-df3e-4beb-b74c-0c4c0d568ec4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.915539 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08da1273-e72a-44f8-82d2-adf17cee8644-config-data\") pod \"nova-kuttl-api-0\" (UID: \"08da1273-e72a-44f8-82d2-adf17cee8644\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.915580 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdnd6\" (UniqueName: \"kubernetes.io/projected/08da1273-e72a-44f8-82d2-adf17cee8644-kube-api-access-tdnd6\") pod \"nova-kuttl-api-0\" (UID: \"08da1273-e72a-44f8-82d2-adf17cee8644\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.915599 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08da1273-e72a-44f8-82d2-adf17cee8644-logs\") pod \"nova-kuttl-api-0\" (UID: \"08da1273-e72a-44f8-82d2-adf17cee8644\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.915649 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a2ad7dd-d80c-4eb4-8531-c2a8208bb760-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"5a2ad7dd-d80c-4eb4-8531-c2a8208bb760\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.915737 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kpdm\" (UniqueName: \"kubernetes.io/projected/5a2ad7dd-d80c-4eb4-8531-c2a8208bb760-kube-api-access-7kpdm\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"5a2ad7dd-d80c-4eb4-8531-c2a8208bb760\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.916541 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08da1273-e72a-44f8-82d2-adf17cee8644-logs\") pod \"nova-kuttl-api-0\" (UID: \"08da1273-e72a-44f8-82d2-adf17cee8644\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.918996 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/08da1273-e72a-44f8-82d2-adf17cee8644-config-data\") pod \"nova-kuttl-api-0\" (UID: \"08da1273-e72a-44f8-82d2-adf17cee8644\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.919755 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a2ad7dd-d80c-4eb4-8531-c2a8208bb760-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"5a2ad7dd-d80c-4eb4-8531-c2a8208bb760\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.930591 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdnd6\" (UniqueName: \"kubernetes.io/projected/08da1273-e72a-44f8-82d2-adf17cee8644-kube-api-access-tdnd6\") pod \"nova-kuttl-api-0\" (UID: \"08da1273-e72a-44f8-82d2-adf17cee8644\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.931492 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kpdm\" (UniqueName: \"kubernetes.io/projected/5a2ad7dd-d80c-4eb4-8531-c2a8208bb760-kube-api-access-7kpdm\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"5a2ad7dd-d80c-4eb4-8531-c2a8208bb760\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.967121 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.035267 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.042449 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.056506 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.095966 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf"] Jan 23 14:31:42 crc kubenswrapper[4775]: W0123 14:31:42.136952 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf751d2a1_4497_4fb2_9c13_af54db584a48.slice/crio-f4d3648db22ffaa2982189751dbd63d9f0d1f5aeb1792dd9802861788bfc90c1 WatchSource:0}: Error finding container f4d3648db22ffaa2982189751dbd63d9f0d1f5aeb1792dd9802861788bfc90c1: Status 404 returned error can't find the container with id f4d3648db22ffaa2982189751dbd63d9f0d1f5aeb1792dd9802861788bfc90c1 Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.193248 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5"] Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.194431 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5" Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.196627 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.197159 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-scripts" Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.211420 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5"] Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.323277 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnnd8\" (UniqueName: \"kubernetes.io/projected/c4701d5c-309d-4969-852b-83626330e0df-kube-api-access-hnnd8\") pod \"nova-kuttl-cell1-conductor-db-sync-sq2k5\" (UID: \"c4701d5c-309d-4969-852b-83626330e0df\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5" Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.323585 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4701d5c-309d-4969-852b-83626330e0df-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-sq2k5\" (UID: \"c4701d5c-309d-4969-852b-83626330e0df\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5" Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.323632 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4701d5c-309d-4969-852b-83626330e0df-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-sq2k5\" (UID: \"c4701d5c-309d-4969-852b-83626330e0df\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5" Jan 23 14:31:42 crc kubenswrapper[4775]: W0123 14:31:42.408355 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d46934b_df3e_4beb_b74c_0c4c0d568ec4.slice/crio-b0a7899d7e01d16f0552c389419e173609b2f257ffd2f8c9231f3ed21a6bb023 WatchSource:0}: Error finding container b0a7899d7e01d16f0552c389419e173609b2f257ffd2f8c9231f3ed21a6bb023: Status 404 returned error can't find the container with id b0a7899d7e01d16f0552c389419e173609b2f257ffd2f8c9231f3ed21a6bb023 Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.409718 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.425357 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnnd8\" (UniqueName: \"kubernetes.io/projected/c4701d5c-309d-4969-852b-83626330e0df-kube-api-access-hnnd8\") pod \"nova-kuttl-cell1-conductor-db-sync-sq2k5\" (UID: \"c4701d5c-309d-4969-852b-83626330e0df\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5" Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.425441 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4701d5c-309d-4969-852b-83626330e0df-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-sq2k5\" (UID: \"c4701d5c-309d-4969-852b-83626330e0df\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5" Jan 23 14:31:42 crc 
kubenswrapper[4775]: I0123 14:31:42.425475 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4701d5c-309d-4969-852b-83626330e0df-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-sq2k5\" (UID: \"c4701d5c-309d-4969-852b-83626330e0df\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5" Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.429281 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4701d5c-309d-4969-852b-83626330e0df-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-sq2k5\" (UID: \"c4701d5c-309d-4969-852b-83626330e0df\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5" Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.434335 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4701d5c-309d-4969-852b-83626330e0df-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-sq2k5\" (UID: \"c4701d5c-309d-4969-852b-83626330e0df\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5" Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.447006 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnnd8\" (UniqueName: \"kubernetes.io/projected/c4701d5c-309d-4969-852b-83626330e0df-kube-api-access-hnnd8\") pod \"nova-kuttl-cell1-conductor-db-sync-sq2k5\" (UID: \"c4701d5c-309d-4969-852b-83626330e0df\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5" Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.468215 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"3d46934b-df3e-4beb-b74c-0c4c0d568ec4","Type":"ContainerStarted","Data":"b0a7899d7e01d16f0552c389419e173609b2f257ffd2f8c9231f3ed21a6bb023"} Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.469632 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf" event={"ID":"f751d2a1-4497-4fb2-9c13-af54db584a48","Type":"ContainerStarted","Data":"f3d6d9e6a7043cb32f7f7ac11281394b9efc64f38742f080cf771797930a3cc3"} Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.469656 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf" event={"ID":"f751d2a1-4497-4fb2-9c13-af54db584a48","Type":"ContainerStarted","Data":"f4d3648db22ffaa2982189751dbd63d9f0d1f5aeb1792dd9802861788bfc90c1"} Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.484279 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf" podStartSLOduration=1.48426462 podStartE2EDuration="1.48426462s" podCreationTimestamp="2026-01-23 14:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:31:42.481320331 +0000 UTC m=+1649.476149061" watchObservedRunningTime="2026-01-23 14:31:42.48426462 +0000 UTC m=+1649.479093360" Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.519173 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5" Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.542192 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:31:42 crc kubenswrapper[4775]: W0123 14:31:42.558992 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bbfaeee_c2d4_472c_a3da_5e055c5ecf08.slice/crio-4bb99c419617c5d3350f7f18a33de693e261be8f8db3347a5092cc5ab5db2fb2 WatchSource:0}: Error finding container 4bb99c419617c5d3350f7f18a33de693e261be8f8db3347a5092cc5ab5db2fb2: Status 404 returned error can't find the container with id 4bb99c419617c5d3350f7f18a33de693e261be8f8db3347a5092cc5ab5db2fb2 Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.625943 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 23 14:31:42 crc kubenswrapper[4775]: W0123 14:31:42.634893 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a2ad7dd_d80c_4eb4_8531_c2a8208bb760.slice/crio-73c77f39c3e21579fd11ef895bb7a7f0e8b32a22edb065c50cab5df5c5dc9b81 WatchSource:0}: Error finding container 73c77f39c3e21579fd11ef895bb7a7f0e8b32a22edb065c50cab5df5c5dc9b81: Status 404 returned error can't find the container with id 73c77f39c3e21579fd11ef895bb7a7f0e8b32a22edb065c50cab5df5c5dc9b81 Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.709857 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.973916 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5"] Jan 23 14:31:42 crc kubenswrapper[4775]: W0123 14:31:42.981287 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4701d5c_309d_4969_852b_83626330e0df.slice/crio-6b9cb2fac108dba678d9ff9b704cba3010ac0cea440cea4de7bc23cec83336ae WatchSource:0}: Error finding container 6b9cb2fac108dba678d9ff9b704cba3010ac0cea440cea4de7bc23cec83336ae: Status 404 returned error can't find the container with id 6b9cb2fac108dba678d9ff9b704cba3010ac0cea440cea4de7bc23cec83336ae Jan 23 14:31:43 crc kubenswrapper[4775]: I0123 14:31:43.485870 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"3d46934b-df3e-4beb-b74c-0c4c0d568ec4","Type":"ContainerStarted","Data":"e4688d8f9959793b3c09c75ee759bf5f6942cfd383400a35a6a02f55e85b0d1d"} Jan 23 14:31:43 crc kubenswrapper[4775]: I0123 14:31:43.492684 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5" event={"ID":"c4701d5c-309d-4969-852b-83626330e0df","Type":"ContainerStarted","Data":"827309d081a52f2f4fbdc446573f9dbf6756c3faef728c7a3ede91f774184851"} Jan 23 14:31:43 crc kubenswrapper[4775]: I0123 14:31:43.492732 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5" event={"ID":"c4701d5c-309d-4969-852b-83626330e0df","Type":"ContainerStarted","Data":"6b9cb2fac108dba678d9ff9b704cba3010ac0cea440cea4de7bc23cec83336ae"} Jan 23 14:31:43 crc kubenswrapper[4775]: I0123 14:31:43.497834 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" 
event={"ID":"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08","Type":"ContainerStarted","Data":"511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df"} Jan 23 14:31:43 crc kubenswrapper[4775]: I0123 14:31:43.497923 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08","Type":"ContainerStarted","Data":"bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7"} Jan 23 14:31:43 crc kubenswrapper[4775]: I0123 14:31:43.497950 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08","Type":"ContainerStarted","Data":"4bb99c419617c5d3350f7f18a33de693e261be8f8db3347a5092cc5ab5db2fb2"} Jan 23 14:31:43 crc kubenswrapper[4775]: I0123 14:31:43.502862 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"08da1273-e72a-44f8-82d2-adf17cee8644","Type":"ContainerStarted","Data":"e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275"} Jan 23 14:31:43 crc kubenswrapper[4775]: I0123 14:31:43.502917 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"08da1273-e72a-44f8-82d2-adf17cee8644","Type":"ContainerStarted","Data":"9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23"} Jan 23 14:31:43 crc kubenswrapper[4775]: I0123 14:31:43.502931 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"08da1273-e72a-44f8-82d2-adf17cee8644","Type":"ContainerStarted","Data":"4a5f991b7499aef449c7bccc5f57357c23ade00d8e943dce54d385ab79061ebd"} Jan 23 14:31:43 crc kubenswrapper[4775]: I0123 14:31:43.509911 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"5a2ad7dd-d80c-4eb4-8531-c2a8208bb760","Type":"ContainerStarted","Data":"b3037b72f855e3514727ac579826433af99bcec07db67273c699c91b0c386a1b"} Jan 23 14:31:43 crc kubenswrapper[4775]: I0123 14:31:43.509955 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"5a2ad7dd-d80c-4eb4-8531-c2a8208bb760","Type":"ContainerStarted","Data":"73c77f39c3e21579fd11ef895bb7a7f0e8b32a22edb065c50cab5df5c5dc9b81"} Jan 23 14:31:43 crc kubenswrapper[4775]: I0123 14:31:43.537611 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.537584487 podStartE2EDuration="2.537584487s" podCreationTimestamp="2026-01-23 14:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:31:43.499887739 +0000 UTC m=+1650.494716479" watchObservedRunningTime="2026-01-23 14:31:43.537584487 +0000 UTC m=+1650.532413227" Jan 23 14:31:43 crc kubenswrapper[4775]: I0123 14:31:43.549175 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.549154706 podStartE2EDuration="2.549154706s" podCreationTimestamp="2026-01-23 14:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:31:43.537567966 +0000 UTC m=+1650.532396716" watchObservedRunningTime="2026-01-23 14:31:43.549154706 +0000 UTC m=+1650.543983456" Jan 23 14:31:43 crc 
kubenswrapper[4775]: I0123 14:31:43.562837 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5" podStartSLOduration=1.562820211 podStartE2EDuration="1.562820211s" podCreationTimestamp="2026-01-23 14:31:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:31:43.562334008 +0000 UTC m=+1650.557162768" watchObservedRunningTime="2026-01-23 14:31:43.562820211 +0000 UTC m=+1650.557648951" Jan 23 14:31:43 crc kubenswrapper[4775]: I0123 14:31:43.602696 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.6026742670000003 podStartE2EDuration="2.602674267s" podCreationTimestamp="2026-01-23 14:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:31:43.587589854 +0000 UTC m=+1650.582418634" watchObservedRunningTime="2026-01-23 14:31:43.602674267 +0000 UTC m=+1650.597503017" Jan 23 14:31:43 crc kubenswrapper[4775]: I0123 14:31:43.606633 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podStartSLOduration=2.606615442 podStartE2EDuration="2.606615442s" podCreationTimestamp="2026-01-23 14:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:31:43.605259746 +0000 UTC m=+1650.600088486" watchObservedRunningTime="2026-01-23 14:31:43.606615442 +0000 UTC m=+1650.601444182" Jan 23 14:31:45 crc kubenswrapper[4775]: I0123 14:31:45.528575 4775 generic.go:334] "Generic (PLEG): container finished" podID="c4701d5c-309d-4969-852b-83626330e0df" containerID="827309d081a52f2f4fbdc446573f9dbf6756c3faef728c7a3ede91f774184851" exitCode=0 Jan 23 14:31:45 crc kubenswrapper[4775]: I0123 14:31:45.528671 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5" event={"ID":"c4701d5c-309d-4969-852b-83626330e0df","Type":"ContainerDied","Data":"827309d081a52f2f4fbdc446573f9dbf6756c3faef728c7a3ede91f774184851"} Jan 23 14:31:46 crc kubenswrapper[4775]: I0123 14:31:46.967509 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:31:46 crc kubenswrapper[4775]: I0123 14:31:46.967896 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5" Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.035906 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.036411 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.057085 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.106158 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4701d5c-309d-4969-852b-83626330e0df-scripts\") pod \"c4701d5c-309d-4969-852b-83626330e0df\" (UID: \"c4701d5c-309d-4969-852b-83626330e0df\") " Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.106327 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnnd8\" (UniqueName: \"kubernetes.io/projected/c4701d5c-309d-4969-852b-83626330e0df-kube-api-access-hnnd8\") pod \"c4701d5c-309d-4969-852b-83626330e0df\" (UID: \"c4701d5c-309d-4969-852b-83626330e0df\") " Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.106378 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4701d5c-309d-4969-852b-83626330e0df-config-data\") pod \"c4701d5c-309d-4969-852b-83626330e0df\" (UID: \"c4701d5c-309d-4969-852b-83626330e0df\") " Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.112367 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4701d5c-309d-4969-852b-83626330e0df-kube-api-access-hnnd8" (OuterVolumeSpecName: "kube-api-access-hnnd8") pod "c4701d5c-309d-4969-852b-83626330e0df" (UID: "c4701d5c-309d-4969-852b-83626330e0df"). InnerVolumeSpecName "kube-api-access-hnnd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.112742 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4701d5c-309d-4969-852b-83626330e0df-scripts" (OuterVolumeSpecName: "scripts") pod "c4701d5c-309d-4969-852b-83626330e0df" (UID: "c4701d5c-309d-4969-852b-83626330e0df"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.138476 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4701d5c-309d-4969-852b-83626330e0df-config-data" (OuterVolumeSpecName: "config-data") pod "c4701d5c-309d-4969-852b-83626330e0df" (UID: "c4701d5c-309d-4969-852b-83626330e0df"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.209126 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4701d5c-309d-4969-852b-83626330e0df-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.209182 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnnd8\" (UniqueName: \"kubernetes.io/projected/c4701d5c-309d-4969-852b-83626330e0df-kube-api-access-hnnd8\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.209204 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4701d5c-309d-4969-852b-83626330e0df-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.553751 4775 generic.go:334] "Generic (PLEG): container finished" podID="f751d2a1-4497-4fb2-9c13-af54db584a48" containerID="f3d6d9e6a7043cb32f7f7ac11281394b9efc64f38742f080cf771797930a3cc3" exitCode=0 Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.553897 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf" event={"ID":"f751d2a1-4497-4fb2-9c13-af54db584a48","Type":"ContainerDied","Data":"f3d6d9e6a7043cb32f7f7ac11281394b9efc64f38742f080cf771797930a3cc3"} Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.555917 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5" event={"ID":"c4701d5c-309d-4969-852b-83626330e0df","Type":"ContainerDied","Data":"6b9cb2fac108dba678d9ff9b704cba3010ac0cea440cea4de7bc23cec83336ae"} Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.555987 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b9cb2fac108dba678d9ff9b704cba3010ac0cea440cea4de7bc23cec83336ae" Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.556161 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5" Jan 23 14:31:48 crc kubenswrapper[4775]: I0123 14:31:48.099707 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:31:48 crc kubenswrapper[4775]: E0123 14:31:48.100364 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4701d5c-309d-4969-852b-83626330e0df" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 23 14:31:48 crc kubenswrapper[4775]: I0123 14:31:48.100388 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4701d5c-309d-4969-852b-83626330e0df" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 23 14:31:48 crc kubenswrapper[4775]: I0123 14:31:48.100774 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4701d5c-309d-4969-852b-83626330e0df" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 23 14:31:48 crc kubenswrapper[4775]: I0123 14:31:48.101684 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:31:48 crc kubenswrapper[4775]: I0123 14:31:48.104452 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 23 14:31:48 crc kubenswrapper[4775]: I0123 14:31:48.112542 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:31:48 crc kubenswrapper[4775]: I0123 14:31:48.228383 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e279d5d-df37-483b-9bc7-682b48b2dbc4-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"4e279d5d-df37-483b-9bc7-682b48b2dbc4\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:31:48 crc kubenswrapper[4775]: I0123 14:31:48.228612 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7tvz\" (UniqueName: \"kubernetes.io/projected/4e279d5d-df37-483b-9bc7-682b48b2dbc4-kube-api-access-c7tvz\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"4e279d5d-df37-483b-9bc7-682b48b2dbc4\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:31:48 crc kubenswrapper[4775]: I0123 14:31:48.330947 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7tvz\" (UniqueName: \"kubernetes.io/projected/4e279d5d-df37-483b-9bc7-682b48b2dbc4-kube-api-access-c7tvz\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"4e279d5d-df37-483b-9bc7-682b48b2dbc4\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:31:48 crc kubenswrapper[4775]: I0123 14:31:48.331186 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e279d5d-df37-483b-9bc7-682b48b2dbc4-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"4e279d5d-df37-483b-9bc7-682b48b2dbc4\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:31:48 crc kubenswrapper[4775]: I0123 14:31:48.350180 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e279d5d-df37-483b-9bc7-682b48b2dbc4-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"4e279d5d-df37-483b-9bc7-682b48b2dbc4\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:31:48 crc kubenswrapper[4775]: I0123 14:31:48.365131 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7tvz\" (UniqueName: \"kubernetes.io/projected/4e279d5d-df37-483b-9bc7-682b48b2dbc4-kube-api-access-c7tvz\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"4e279d5d-df37-483b-9bc7-682b48b2dbc4\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:31:48 crc kubenswrapper[4775]: I0123 14:31:48.435213 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:31:48 crc kubenswrapper[4775]: I0123 14:31:48.882007 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf" Jan 23 14:31:48 crc kubenswrapper[4775]: I0123 14:31:48.903135 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.041333 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f751d2a1-4497-4fb2-9c13-af54db584a48-config-data\") pod \"f751d2a1-4497-4fb2-9c13-af54db584a48\" (UID: \"f751d2a1-4497-4fb2-9c13-af54db584a48\") " Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.041477 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2q2j\" (UniqueName: \"kubernetes.io/projected/f751d2a1-4497-4fb2-9c13-af54db584a48-kube-api-access-z2q2j\") pod \"f751d2a1-4497-4fb2-9c13-af54db584a48\" (UID: \"f751d2a1-4497-4fb2-9c13-af54db584a48\") " Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.041520 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f751d2a1-4497-4fb2-9c13-af54db584a48-scripts\") pod \"f751d2a1-4497-4fb2-9c13-af54db584a48\" (UID: \"f751d2a1-4497-4fb2-9c13-af54db584a48\") " Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.044598 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f751d2a1-4497-4fb2-9c13-af54db584a48-scripts" (OuterVolumeSpecName: "scripts") pod "f751d2a1-4497-4fb2-9c13-af54db584a48" (UID: "f751d2a1-4497-4fb2-9c13-af54db584a48"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.045714 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f751d2a1-4497-4fb2-9c13-af54db584a48-kube-api-access-z2q2j" (OuterVolumeSpecName: "kube-api-access-z2q2j") pod "f751d2a1-4497-4fb2-9c13-af54db584a48" (UID: "f751d2a1-4497-4fb2-9c13-af54db584a48"). InnerVolumeSpecName "kube-api-access-z2q2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.077177 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f751d2a1-4497-4fb2-9c13-af54db584a48-config-data" (OuterVolumeSpecName: "config-data") pod "f751d2a1-4497-4fb2-9c13-af54db584a48" (UID: "f751d2a1-4497-4fb2-9c13-af54db584a48"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.143604 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f751d2a1-4497-4fb2-9c13-af54db584a48-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.143666 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2q2j\" (UniqueName: \"kubernetes.io/projected/f751d2a1-4497-4fb2-9c13-af54db584a48-kube-api-access-z2q2j\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.143687 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f751d2a1-4497-4fb2-9c13-af54db584a48-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.589075 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf" event={"ID":"f751d2a1-4497-4fb2-9c13-af54db584a48","Type":"ContainerDied","Data":"f4d3648db22ffaa2982189751dbd63d9f0d1f5aeb1792dd9802861788bfc90c1"} Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.589179 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4d3648db22ffaa2982189751dbd63d9f0d1f5aeb1792dd9802861788bfc90c1" Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.589093 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf" Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.591508 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"4e279d5d-df37-483b-9bc7-682b48b2dbc4","Type":"ContainerStarted","Data":"e4096d3b7888413c8e0420a378fc8bb781cb9864846833a4e649d155b711ef1a"} Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.591554 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"4e279d5d-df37-483b-9bc7-682b48b2dbc4","Type":"ContainerStarted","Data":"004f895311337c942728dd641397c9a9477c224ca4d5348fe186974622dce3f9"} Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.594970 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.632077 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podStartSLOduration=1.632049224 podStartE2EDuration="1.632049224s" podCreationTimestamp="2026-01-23 14:31:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:31:49.623576788 +0000 UTC m=+1656.618405528" watchObservedRunningTime="2026-01-23 14:31:49.632049224 +0000 UTC m=+1656.626878004" Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.714261 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:31:49 crc kubenswrapper[4775]: E0123 14:31:49.714876 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.788066 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.788245 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="08da1273-e72a-44f8-82d2-adf17cee8644" containerName="nova-kuttl-api-log" containerID="cri-o://9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23" gracePeriod=30 Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.788370 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="08da1273-e72a-44f8-82d2-adf17cee8644" containerName="nova-kuttl-api-api" containerID="cri-o://e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275" gracePeriod=30 Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.834539 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.834756 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="3d46934b-df3e-4beb-b74c-0c4c0d568ec4" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://e4688d8f9959793b3c09c75ee759bf5f6942cfd383400a35a6a02f55e85b0d1d" gracePeriod=30 Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.936145 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.936392 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="7bbfaeee-c2d4-472c-a3da-5e055c5ecf08" containerName="nova-kuttl-metadata-log" containerID="cri-o://bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7" gracePeriod=30 Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.936948 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="7bbfaeee-c2d4-472c-a3da-5e055c5ecf08" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df" gracePeriod=30 Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.292143 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.365979 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdnd6\" (UniqueName: \"kubernetes.io/projected/08da1273-e72a-44f8-82d2-adf17cee8644-kube-api-access-tdnd6\") pod \"08da1273-e72a-44f8-82d2-adf17cee8644\" (UID: \"08da1273-e72a-44f8-82d2-adf17cee8644\") " Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.366080 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08da1273-e72a-44f8-82d2-adf17cee8644-config-data\") pod \"08da1273-e72a-44f8-82d2-adf17cee8644\" (UID: \"08da1273-e72a-44f8-82d2-adf17cee8644\") " Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.366144 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08da1273-e72a-44f8-82d2-adf17cee8644-logs\") pod \"08da1273-e72a-44f8-82d2-adf17cee8644\" (UID: \"08da1273-e72a-44f8-82d2-adf17cee8644\") " Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.366904 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08da1273-e72a-44f8-82d2-adf17cee8644-logs" (OuterVolumeSpecName: "logs") pod "08da1273-e72a-44f8-82d2-adf17cee8644" (UID: "08da1273-e72a-44f8-82d2-adf17cee8644"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.386713 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08da1273-e72a-44f8-82d2-adf17cee8644-kube-api-access-tdnd6" (OuterVolumeSpecName: "kube-api-access-tdnd6") pod "08da1273-e72a-44f8-82d2-adf17cee8644" (UID: "08da1273-e72a-44f8-82d2-adf17cee8644"). InnerVolumeSpecName "kube-api-access-tdnd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.389754 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08da1273-e72a-44f8-82d2-adf17cee8644-config-data" (OuterVolumeSpecName: "config-data") pod "08da1273-e72a-44f8-82d2-adf17cee8644" (UID: "08da1273-e72a-44f8-82d2-adf17cee8644"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.469885 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdnd6\" (UniqueName: \"kubernetes.io/projected/08da1273-e72a-44f8-82d2-adf17cee8644-kube-api-access-tdnd6\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.469926 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08da1273-e72a-44f8-82d2-adf17cee8644-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.469939 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08da1273-e72a-44f8-82d2-adf17cee8644-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.535922 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.611430 4775 generic.go:334] "Generic (PLEG): container finished" podID="08da1273-e72a-44f8-82d2-adf17cee8644" containerID="e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275" exitCode=0 Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.611473 4775 generic.go:334] "Generic (PLEG): container finished" podID="08da1273-e72a-44f8-82d2-adf17cee8644" containerID="9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23" exitCode=143 Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.611488 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"08da1273-e72a-44f8-82d2-adf17cee8644","Type":"ContainerDied","Data":"e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275"} Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.611548 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"08da1273-e72a-44f8-82d2-adf17cee8644","Type":"ContainerDied","Data":"9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23"} Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.611563 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"08da1273-e72a-44f8-82d2-adf17cee8644","Type":"ContainerDied","Data":"4a5f991b7499aef449c7bccc5f57357c23ade00d8e943dce54d385ab79061ebd"} Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.611584 4775 scope.go:117] "RemoveContainer" containerID="e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275" Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.611587 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.616062 4775 generic.go:334] "Generic (PLEG): container finished" podID="7bbfaeee-c2d4-472c-a3da-5e055c5ecf08" containerID="511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df" exitCode=0 Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.616095 4775 generic.go:334] "Generic (PLEG): container finished" podID="7bbfaeee-c2d4-472c-a3da-5e055c5ecf08" containerID="bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7" exitCode=143 Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.616139 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08","Type":"ContainerDied","Data":"511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df"} Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.616176 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.616190 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08","Type":"ContainerDied","Data":"bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7"}
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.616210 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08","Type":"ContainerDied","Data":"4bb99c419617c5d3350f7f18a33de693e261be8f8db3347a5092cc5ab5db2fb2"}
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.659117 4775 scope.go:117] "RemoveContainer" containerID="9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.668961 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.677194 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-logs\") pod \"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08\" (UID: \"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08\") "
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.677445 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j8cr\" (UniqueName: \"kubernetes.io/projected/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-kube-api-access-5j8cr\") pod \"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08\" (UID: \"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08\") "
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.677583 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-config-data\") pod \"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08\" (UID: \"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08\") "
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.679666 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-logs" (OuterVolumeSpecName: "logs") pod "7bbfaeee-c2d4-472c-a3da-5e055c5ecf08" (UID: "7bbfaeee-c2d4-472c-a3da-5e055c5ecf08"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.691871 4775 scope.go:117] "RemoveContainer" containerID="e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.692113 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-kube-api-access-5j8cr" (OuterVolumeSpecName: "kube-api-access-5j8cr") pod "7bbfaeee-c2d4-472c-a3da-5e055c5ecf08" (UID: "7bbfaeee-c2d4-472c-a3da-5e055c5ecf08"). InnerVolumeSpecName "kube-api-access-5j8cr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:31:50 crc kubenswrapper[4775]: E0123 14:31:50.694500 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275\": container with ID starting with e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275 not found: ID does not exist" containerID="e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.694653 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275"} err="failed to get container status \"e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275\": rpc error: code = NotFound desc = could not find container \"e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275\": container with ID starting with e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275 not found: ID does not exist"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.694762 4775 scope.go:117] "RemoveContainer" containerID="9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23"
Jan 23 14:31:50 crc kubenswrapper[4775]: E0123 14:31:50.695304 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23\": container with ID starting with 9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23 not found: ID does not exist" containerID="9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.695404 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23"} err="failed to get container status \"9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23\": rpc error: code = NotFound desc = could not find container \"9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23\": container with ID starting with 9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23 not found: ID does not exist"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.695446 4775 scope.go:117] "RemoveContainer" containerID="e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.696266 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275"} err="failed to get container status \"e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275\": rpc error: code = NotFound desc = could not find container \"e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275\": container with ID starting with e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275 not found: ID does not exist"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.696310 4775 scope.go:117] "RemoveContainer" containerID="9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.697008 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23"} err="failed to get container status \"9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23\": rpc error: code = NotFound desc = could not find container \"9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23\": container with ID starting with 9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23 not found: ID does not exist"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.697035 4775 scope.go:117] "RemoveContainer" containerID="511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.697483 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.735137 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-config-data" (OuterVolumeSpecName: "config-data") pod "7bbfaeee-c2d4-472c-a3da-5e055c5ecf08" (UID: "7bbfaeee-c2d4-472c-a3da-5e055c5ecf08"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.735250 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:31:50 crc kubenswrapper[4775]: E0123 14:31:50.735994 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08da1273-e72a-44f8-82d2-adf17cee8644" containerName="nova-kuttl-api-log"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.736041 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="08da1273-e72a-44f8-82d2-adf17cee8644" containerName="nova-kuttl-api-log"
Jan 23 14:31:50 crc kubenswrapper[4775]: E0123 14:31:50.736081 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bbfaeee-c2d4-472c-a3da-5e055c5ecf08" containerName="nova-kuttl-metadata-log"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.736100 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bbfaeee-c2d4-472c-a3da-5e055c5ecf08" containerName="nova-kuttl-metadata-log"
Jan 23 14:31:50 crc kubenswrapper[4775]: E0123 14:31:50.736151 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bbfaeee-c2d4-472c-a3da-5e055c5ecf08" containerName="nova-kuttl-metadata-metadata"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.736168 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bbfaeee-c2d4-472c-a3da-5e055c5ecf08" containerName="nova-kuttl-metadata-metadata"
Jan 23 14:31:50 crc kubenswrapper[4775]: E0123 14:31:50.736199 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f751d2a1-4497-4fb2-9c13-af54db584a48" containerName="nova-manage"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.736216 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f751d2a1-4497-4fb2-9c13-af54db584a48" containerName="nova-manage"
Jan 23 14:31:50 crc kubenswrapper[4775]: E0123 14:31:50.736261 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08da1273-e72a-44f8-82d2-adf17cee8644" containerName="nova-kuttl-api-api"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.736278 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="08da1273-e72a-44f8-82d2-adf17cee8644" containerName="nova-kuttl-api-api"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.736650 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="08da1273-e72a-44f8-82d2-adf17cee8644" containerName="nova-kuttl-api-api"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.736696 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f751d2a1-4497-4fb2-9c13-af54db584a48" containerName="nova-manage"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.736726 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bbfaeee-c2d4-472c-a3da-5e055c5ecf08" containerName="nova-kuttl-metadata-log"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.736741 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="08da1273-e72a-44f8-82d2-adf17cee8644" containerName="nova-kuttl-api-log"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.736762 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bbfaeee-c2d4-472c-a3da-5e055c5ecf08" containerName="nova-kuttl-metadata-metadata"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.738938 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.741529 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.745033 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.759789 4775 scope.go:117] "RemoveContainer" containerID="bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.784072 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-logs\") on node \"crc\" DevicePath \"\""
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.784121 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j8cr\" (UniqueName: \"kubernetes.io/projected/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-kube-api-access-5j8cr\") on node \"crc\" DevicePath \"\""
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.784143 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.790990 4775 scope.go:117] "RemoveContainer" containerID="511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df"
Jan 23 14:31:50 crc kubenswrapper[4775]: E0123 14:31:50.791845 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df\": container with ID starting with 511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df not found: ID does not exist" containerID="511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.791898 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df"} err="failed to get container status \"511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df\": rpc error: code = NotFound desc = could not find container \"511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df\": container with ID starting with 511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df not found: ID does not exist"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.791931 4775 scope.go:117] "RemoveContainer" containerID="bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7"
Jan 23 14:31:50 crc kubenswrapper[4775]: E0123 14:31:50.793387 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7\": container with ID starting with bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7 not found: ID does not exist" containerID="bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.793496 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7"} err="failed to get container status \"bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7\": rpc error: code = NotFound desc = could not find container \"bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7\": container with ID starting with bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7 not found: ID does not exist"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.793605 4775 scope.go:117] "RemoveContainer" containerID="511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.793950 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df"} err="failed to get container status \"511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df\": rpc error: code = NotFound desc = could not find container \"511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df\": container with ID starting with 511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df not found: ID does not exist"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.793976 4775 scope.go:117] "RemoveContainer" containerID="bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.794721 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7"} err="failed to get container status \"bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7\": rpc error: code = NotFound desc = could not find container \"bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7\": container with ID starting with bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7 not found: ID does not exist"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.885382 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d3bad13-3a3b-481d-bdf4-b489422eb398-config-data\") pod \"nova-kuttl-api-0\" (UID: \"0d3bad13-3a3b-481d-bdf4-b489422eb398\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.885689 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbbc7\" (UniqueName: \"kubernetes.io/projected/0d3bad13-3a3b-481d-bdf4-b489422eb398-kube-api-access-tbbc7\") pod \"nova-kuttl-api-0\" (UID: \"0d3bad13-3a3b-481d-bdf4-b489422eb398\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.885868 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d3bad13-3a3b-481d-bdf4-b489422eb398-logs\") pod \"nova-kuttl-api-0\" (UID: \"0d3bad13-3a3b-481d-bdf4-b489422eb398\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.943925 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.951873 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.971361 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.972908 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.987313 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d3bad13-3a3b-481d-bdf4-b489422eb398-config-data\") pod \"nova-kuttl-api-0\" (UID: \"0d3bad13-3a3b-481d-bdf4-b489422eb398\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.987371 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbbc7\" (UniqueName: \"kubernetes.io/projected/0d3bad13-3a3b-481d-bdf4-b489422eb398-kube-api-access-tbbc7\") pod \"nova-kuttl-api-0\" (UID: \"0d3bad13-3a3b-481d-bdf4-b489422eb398\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.987435 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d3bad13-3a3b-481d-bdf4-b489422eb398-logs\") pod \"nova-kuttl-api-0\" (UID: \"0d3bad13-3a3b-481d-bdf4-b489422eb398\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.987900 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d3bad13-3a3b-481d-bdf4-b489422eb398-logs\") pod \"nova-kuttl-api-0\" (UID: \"0d3bad13-3a3b-481d-bdf4-b489422eb398\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.993664 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d3bad13-3a3b-481d-bdf4-b489422eb398-config-data\") pod \"nova-kuttl-api-0\" (UID: \"0d3bad13-3a3b-481d-bdf4-b489422eb398\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.014291 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data"
Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.019326 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbbc7\" (UniqueName: \"kubernetes.io/projected/0d3bad13-3a3b-481d-bdf4-b489422eb398-kube-api-access-tbbc7\") pod \"nova-kuttl-api-0\" (UID: \"0d3bad13-3a3b-481d-bdf4-b489422eb398\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.027129 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.069664 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.092598 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8k7g\" (UniqueName: \"kubernetes.io/projected/09f88f54-b0df-4938-a185-e104d3da129f-kube-api-access-g8k7g\") pod \"nova-kuttl-metadata-0\" (UID: \"09f88f54-b0df-4938-a185-e104d3da129f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.092672 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09f88f54-b0df-4938-a185-e104d3da129f-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"09f88f54-b0df-4938-a185-e104d3da129f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.092711 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09f88f54-b0df-4938-a185-e104d3da129f-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"09f88f54-b0df-4938-a185-e104d3da129f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.193545 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09f88f54-b0df-4938-a185-e104d3da129f-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"09f88f54-b0df-4938-a185-e104d3da129f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.193617 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09f88f54-b0df-4938-a185-e104d3da129f-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"09f88f54-b0df-4938-a185-e104d3da129f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.195997 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8k7g\" (UniqueName: \"kubernetes.io/projected/09f88f54-b0df-4938-a185-e104d3da129f-kube-api-access-g8k7g\") pod \"nova-kuttl-metadata-0\" (UID: \"09f88f54-b0df-4938-a185-e104d3da129f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.197929 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09f88f54-b0df-4938-a185-e104d3da129f-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"09f88f54-b0df-4938-a185-e104d3da129f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.198701 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09f88f54-b0df-4938-a185-e104d3da129f-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"09f88f54-b0df-4938-a185-e104d3da129f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.222175 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8k7g\" (UniqueName: \"kubernetes.io/projected/09f88f54-b0df-4938-a185-e104d3da129f-kube-api-access-g8k7g\") pod \"nova-kuttl-metadata-0\" (UID: \"09f88f54-b0df-4938-a185-e104d3da129f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.291780 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.531673 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.626866 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"0d3bad13-3a3b-481d-bdf4-b489422eb398","Type":"ContainerStarted","Data":"c0e734d13db605e2174d6c175ee9bc97984c9197667104eb9fbac9e883c62175"}
Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.729639 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08da1273-e72a-44f8-82d2-adf17cee8644" path="/var/lib/kubelet/pods/08da1273-e72a-44f8-82d2-adf17cee8644/volumes"
Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.731793 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bbfaeee-c2d4-472c-a3da-5e055c5ecf08" path="/var/lib/kubelet/pods/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08/volumes"
Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.741580 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:31:51 crc kubenswrapper[4775]: W0123 14:31:51.743208 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09f88f54_b0df_4938_a185_e104d3da129f.slice/crio-bd79ebf6c58c98b770d795796bd533c1b69a9ee304177bb14c2b8c21e15ca799 WatchSource:0}: Error finding container bd79ebf6c58c98b770d795796bd533c1b69a9ee304177bb14c2b8c21e15ca799: Status 404 returned error can't find the container with id bd79ebf6c58c98b770d795796bd533c1b69a9ee304177bb14c2b8c21e15ca799
Jan 23 14:31:52 crc kubenswrapper[4775]: I0123 14:31:52.056978 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:31:52 crc kubenswrapper[4775]: I0123 14:31:52.082347 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:31:52 crc kubenswrapper[4775]: I0123 14:31:52.642515 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"0d3bad13-3a3b-481d-bdf4-b489422eb398","Type":"ContainerStarted","Data":"103670e5116f644eb28979803164ccf24c988eb5cc7579ff5f555659c22ef470"}
Jan 23 14:31:52 crc kubenswrapper[4775]: I0123 14:31:52.642999 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"0d3bad13-3a3b-481d-bdf4-b489422eb398","Type":"ContainerStarted","Data":"ba05a0d9c4b290261a774649af75722313f5b48ad074a065cb9b6c8bab2da2a2"}
Jan 23 14:31:52 crc kubenswrapper[4775]: I0123 14:31:52.646113 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"09f88f54-b0df-4938-a185-e104d3da129f","Type":"ContainerStarted","Data":"911eac7827470d33639c04aeb00f69e90569747c8131bfa8a5ec515539b3db54"}
Jan 23 14:31:52 crc kubenswrapper[4775]: I0123 14:31:52.646159 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"09f88f54-b0df-4938-a185-e104d3da129f","Type":"ContainerStarted","Data":"42e590fbdd808de903331a89bedf41b486f926b43ce507f437a540852724aa36"}
Jan 23 14:31:52 crc kubenswrapper[4775]: I0123 14:31:52.646185 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"09f88f54-b0df-4938-a185-e104d3da129f","Type":"ContainerStarted","Data":"bd79ebf6c58c98b770d795796bd533c1b69a9ee304177bb14c2b8c21e15ca799"}
Jan 23 14:31:52 crc kubenswrapper[4775]: I0123 14:31:52.662363 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:31:52 crc kubenswrapper[4775]: I0123 14:31:52.672397 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.672373938 podStartE2EDuration="2.672373938s" podCreationTimestamp="2026-01-23 14:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:31:52.66832464 +0000 UTC m=+1659.663153420" watchObservedRunningTime="2026-01-23 14:31:52.672373938 +0000 UTC m=+1659.667202708"
Jan 23 14:31:52 crc kubenswrapper[4775]: I0123 14:31:52.712688 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.712653605 podStartE2EDuration="2.712653605s" podCreationTimestamp="2026-01-23 14:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:31:52.698426545 +0000 UTC m=+1659.693255345" watchObservedRunningTime="2026-01-23 14:31:52.712653605 +0000 UTC m=+1659.707482385"
Jan 23 14:31:53 crc kubenswrapper[4775]: I0123 14:31:53.667069 4775 generic.go:334] "Generic (PLEG): container finished" podID="3d46934b-df3e-4beb-b74c-0c4c0d568ec4" containerID="e4688d8f9959793b3c09c75ee759bf5f6942cfd383400a35a6a02f55e85b0d1d" exitCode=0
Jan 23 14:31:53 crc kubenswrapper[4775]: I0123 14:31:53.667197 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"3d46934b-df3e-4beb-b74c-0c4c0d568ec4","Type":"ContainerDied","Data":"e4688d8f9959793b3c09c75ee759bf5f6942cfd383400a35a6a02f55e85b0d1d"}
Jan 23 14:31:53 crc kubenswrapper[4775]: I0123 14:31:53.822581 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:31:53 crc kubenswrapper[4775]: I0123 14:31:53.843491 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d46934b-df3e-4beb-b74c-0c4c0d568ec4-config-data\") pod \"3d46934b-df3e-4beb-b74c-0c4c0d568ec4\" (UID: \"3d46934b-df3e-4beb-b74c-0c4c0d568ec4\") "
Jan 23 14:31:53 crc kubenswrapper[4775]: I0123 14:31:53.843690 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zdws\" (UniqueName: \"kubernetes.io/projected/3d46934b-df3e-4beb-b74c-0c4c0d568ec4-kube-api-access-7zdws\") pod \"3d46934b-df3e-4beb-b74c-0c4c0d568ec4\" (UID: \"3d46934b-df3e-4beb-b74c-0c4c0d568ec4\") "
Jan 23 14:31:53 crc kubenswrapper[4775]: I0123 14:31:53.856618 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d46934b-df3e-4beb-b74c-0c4c0d568ec4-kube-api-access-7zdws" (OuterVolumeSpecName: "kube-api-access-7zdws") pod "3d46934b-df3e-4beb-b74c-0c4c0d568ec4" (UID: "3d46934b-df3e-4beb-b74c-0c4c0d568ec4"). InnerVolumeSpecName "kube-api-access-7zdws". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:31:53 crc kubenswrapper[4775]: I0123 14:31:53.888568 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d46934b-df3e-4beb-b74c-0c4c0d568ec4-config-data" (OuterVolumeSpecName: "config-data") pod "3d46934b-df3e-4beb-b74c-0c4c0d568ec4" (UID: "3d46934b-df3e-4beb-b74c-0c4c0d568ec4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:31:53 crc kubenswrapper[4775]: I0123 14:31:53.946105 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d46934b-df3e-4beb-b74c-0c4c0d568ec4-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:31:53 crc kubenswrapper[4775]: I0123 14:31:53.946167 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zdws\" (UniqueName: \"kubernetes.io/projected/3d46934b-df3e-4beb-b74c-0c4c0d568ec4-kube-api-access-7zdws\") on node \"crc\" DevicePath \"\""
Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.687146 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"3d46934b-df3e-4beb-b74c-0c4c0d568ec4","Type":"ContainerDied","Data":"b0a7899d7e01d16f0552c389419e173609b2f257ffd2f8c9231f3ed21a6bb023"}
Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.687228 4775 scope.go:117] "RemoveContainer" containerID="e4688d8f9959793b3c09c75ee759bf5f6942cfd383400a35a6a02f55e85b0d1d"
Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.687240 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.751903 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.772177 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.781125 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:31:54 crc kubenswrapper[4775]: E0123 14:31:54.791206 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d46934b-df3e-4beb-b74c-0c4c0d568ec4" containerName="nova-kuttl-scheduler-scheduler"
Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.791249 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d46934b-df3e-4beb-b74c-0c4c0d568ec4" containerName="nova-kuttl-scheduler-scheduler"
Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.791518 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d46934b-df3e-4beb-b74c-0c4c0d568ec4" containerName="nova-kuttl-scheduler-scheduler"
Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.792225 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.792438 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.794792 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data"
Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.869072 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trfqx\" (UniqueName: \"kubernetes.io/projected/76b301e2-214f-47ac-99b1-2cc76488c253-kube-api-access-trfqx\") pod \"nova-kuttl-scheduler-0\" (UID: \"76b301e2-214f-47ac-99b1-2cc76488c253\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.869221 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b301e2-214f-47ac-99b1-2cc76488c253-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"76b301e2-214f-47ac-99b1-2cc76488c253\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.970893 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trfqx\" (UniqueName: \"kubernetes.io/projected/76b301e2-214f-47ac-99b1-2cc76488c253-kube-api-access-trfqx\") pod \"nova-kuttl-scheduler-0\" (UID: \"76b301e2-214f-47ac-99b1-2cc76488c253\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.970995 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b301e2-214f-47ac-99b1-2cc76488c253-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"76b301e2-214f-47ac-99b1-2cc76488c253\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.976668 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b301e2-214f-47ac-99b1-2cc76488c253-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"76b301e2-214f-47ac-99b1-2cc76488c253\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.991371 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trfqx\" (UniqueName: \"kubernetes.io/projected/76b301e2-214f-47ac-99b1-2cc76488c253-kube-api-access-trfqx\") pod \"nova-kuttl-scheduler-0\" (UID: \"76b301e2-214f-47ac-99b1-2cc76488c253\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:31:55 crc kubenswrapper[4775]: I0123 14:31:55.117178 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:31:55 crc kubenswrapper[4775]: W0123 14:31:55.730992 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76b301e2_214f_47ac_99b1_2cc76488c253.slice/crio-c44d31ef626505fe64787a5927f46c125139a4b911b2ce50c292d1e7a21655a0 WatchSource:0}: Error finding container c44d31ef626505fe64787a5927f46c125139a4b911b2ce50c292d1e7a21655a0: Status 404 returned error can't find the container with id c44d31ef626505fe64787a5927f46c125139a4b911b2ce50c292d1e7a21655a0
Jan 23 14:31:55 crc kubenswrapper[4775]: I0123 14:31:55.731674 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d46934b-df3e-4beb-b74c-0c4c0d568ec4" path="/var/lib/kubelet/pods/3d46934b-df3e-4beb-b74c-0c4c0d568ec4/volumes"
Jan 23 14:31:55 crc kubenswrapper[4775]: I0123 14:31:55.732998 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:31:56 crc kubenswrapper[4775]: I0123 14:31:56.292359 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:31:56 crc kubenswrapper[4775]: I0123 14:31:56.294039 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:31:56 crc kubenswrapper[4775]: I0123 14:31:56.739007 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"76b301e2-214f-47ac-99b1-2cc76488c253","Type":"ContainerStarted","Data":"3b5691326bb0d178b840a8fa1eaea852a75d863f4e1bb47f05120caae1fc9a32"}
Jan 23 14:31:56 crc kubenswrapper[4775]: I0123 14:31:56.739075 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"76b301e2-214f-47ac-99b1-2cc76488c253","Type":"ContainerStarted","Data":"c44d31ef626505fe64787a5927f46c125139a4b911b2ce50c292d1e7a21655a0"}
Jan 23 14:31:56 crc kubenswrapper[4775]: I0123 14:31:56.766792 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.76677014 podStartE2EDuration="2.76677014s" podCreationTimestamp="2026-01-23 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:31:56.764084418 +0000 UTC m=+1663.758913218" watchObservedRunningTime="2026-01-23 14:31:56.76677014 +0000 UTC m=+1663.761598890"
Jan 23 14:31:58 crc kubenswrapper[4775]: I0123 14:31:58.479918 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:31:59 crc kubenswrapper[4775]: I0123 14:31:59.102243 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt"]
Jan 23 14:31:59 crc kubenswrapper[4775]: I0123 14:31:59.103462 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt"
Jan 23 14:31:59 crc kubenswrapper[4775]: I0123 14:31:59.106017 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-scripts"
Jan 23 14:31:59 crc kubenswrapper[4775]: I0123 14:31:59.106721 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-config-data"
Jan 23 14:31:59 crc kubenswrapper[4775]: I0123 14:31:59.123488 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt"]
Jan 23 14:31:59 crc kubenswrapper[4775]: I0123 14:31:59.246945 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4vcv\" (UniqueName: \"kubernetes.io/projected/bc9f9b55-ea71-4396-82bf-2a49788ccc42-kube-api-access-g4vcv\") pod \"nova-kuttl-cell1-cell-mapping-vtvrt\" (UID: \"bc9f9b55-ea71-4396-82bf-2a49788ccc42\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt"
Jan 23 14:31:59 crc kubenswrapper[4775]: I0123 14:31:59.247038 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc9f9b55-ea71-4396-82bf-2a49788ccc42-scripts\") pod \"nova-kuttl-cell1-cell-mapping-vtvrt\" (UID: \"bc9f9b55-ea71-4396-82bf-2a49788ccc42\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt"
Jan 23 14:31:59 crc kubenswrapper[4775]: I0123 14:31:59.247109 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc9f9b55-ea71-4396-82bf-2a49788ccc42-config-data\") pod \"nova-kuttl-cell1-cell-mapping-vtvrt\" (UID: \"bc9f9b55-ea71-4396-82bf-2a49788ccc42\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt"
Jan 23 14:31:59 crc kubenswrapper[4775]: I0123 14:31:59.349615 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4vcv\" (UniqueName: \"kubernetes.io/projected/bc9f9b55-ea71-4396-82bf-2a49788ccc42-kube-api-access-g4vcv\") pod \"nova-kuttl-cell1-cell-mapping-vtvrt\" (UID: \"bc9f9b55-ea71-4396-82bf-2a49788ccc42\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt"
Jan 23 14:31:59 crc kubenswrapper[4775]: I0123 14:31:59.349717 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc9f9b55-ea71-4396-82bf-2a49788ccc42-scripts\") pod \"nova-kuttl-cell1-cell-mapping-vtvrt\" (UID: \"bc9f9b55-ea71-4396-82bf-2a49788ccc42\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt"
Jan 23 14:31:59 crc kubenswrapper[4775]: I0123 14:31:59.349834 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc9f9b55-ea71-4396-82bf-2a49788ccc42-config-data\") pod \"nova-kuttl-cell1-cell-mapping-vtvrt\" (UID: \"bc9f9b55-ea71-4396-82bf-2a49788ccc42\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt"
Jan 23 14:31:59 crc kubenswrapper[4775]: I0123 14:31:59.359915 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc9f9b55-ea71-4396-82bf-2a49788ccc42-scripts\") pod \"nova-kuttl-cell1-cell-mapping-vtvrt\" (UID: \"bc9f9b55-ea71-4396-82bf-2a49788ccc42\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt"
Jan 23 14:31:59 crc kubenswrapper[4775]: I0123 14:31:59.360124 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc9f9b55-ea71-4396-82bf-2a49788ccc42-config-data\") pod \"nova-kuttl-cell1-cell-mapping-vtvrt\" (UID: \"bc9f9b55-ea71-4396-82bf-2a49788ccc42\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt"
Jan 23 14:31:59 crc kubenswrapper[4775]: I0123 14:31:59.382331 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4vcv\" (UniqueName: \"kubernetes.io/projected/bc9f9b55-ea71-4396-82bf-2a49788ccc42-kube-api-access-g4vcv\") pod \"nova-kuttl-cell1-cell-mapping-vtvrt\" (UID: \"bc9f9b55-ea71-4396-82bf-2a49788ccc42\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt"
Jan 23 14:31:59 crc kubenswrapper[4775]: I0123 14:31:59.438334 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt"
Jan 23 14:31:59 crc kubenswrapper[4775]: I0123 14:31:59.925519 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt"]
Jan 23 14:31:59 crc kubenswrapper[4775]: W0123 14:31:59.926397 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc9f9b55_ea71_4396_82bf_2a49788ccc42.slice/crio-8826b475b7f4985e9df7c5956361970ec9965a4aa8e0ce85f18a8f4f7a7db30f WatchSource:0}: Error finding container 8826b475b7f4985e9df7c5956361970ec9965a4aa8e0ce85f18a8f4f7a7db30f: Status 404 returned error can't find the container with id 8826b475b7f4985e9df7c5956361970ec9965a4aa8e0ce85f18a8f4f7a7db30f
Jan 23 14:32:00 crc kubenswrapper[4775]: I0123 14:32:00.119960 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:32:00 crc kubenswrapper[4775]: I0123 14:32:00.788465 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt" event={"ID":"bc9f9b55-ea71-4396-82bf-2a49788ccc42","Type":"ContainerStarted","Data":"af2e3d2fa526f083ebc61856e091755e854affc68850f0ccf9dc55db4575410a"}
Jan 23 14:32:00 crc kubenswrapper[4775]: I0123 14:32:00.788527 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt" event={"ID":"bc9f9b55-ea71-4396-82bf-2a49788ccc42","Type":"ContainerStarted","Data":"8826b475b7f4985e9df7c5956361970ec9965a4aa8e0ce85f18a8f4f7a7db30f"}
Jan 23 14:32:00 crc kubenswrapper[4775]: I0123 14:32:00.828306 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt" podStartSLOduration=1.828277971 podStartE2EDuration="1.828277971s" podCreationTimestamp="2026-01-23 14:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:32:00.819143097 +0000 UTC m=+1667.813971877" watchObservedRunningTime="2026-01-23 14:32:00.828277971 +0000 UTC m=+1667.823106751"
Jan 23 14:32:01 crc kubenswrapper[4775]: I0123 14:32:01.070681 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:32:01 crc kubenswrapper[4775]: I0123 14:32:01.072496 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:32:01 crc kubenswrapper[4775]: I0123 14:32:01.292692 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:32:01 crc kubenswrapper[4775]: I0123 14:32:01.292748 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:32:02 crc kubenswrapper[4775]: I0123 14:32:02.112075 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="0d3bad13-3a3b-481d-bdf4-b489422eb398" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.159:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:32:02 crc kubenswrapper[4775]: I0123 14:32:02.153107 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="0d3bad13-3a3b-481d-bdf4-b489422eb398" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.159:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:32:02 crc kubenswrapper[4775]: I0123 14:32:02.375070 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="09f88f54-b0df-4938-a185-e104d3da129f" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.160:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:32:02 crc kubenswrapper[4775]: I0123 14:32:02.375011 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="09f88f54-b0df-4938-a185-e104d3da129f" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.160:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:32:02 crc kubenswrapper[4775]: I0123 14:32:02.713944 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342"
Jan 23 14:32:02 crc kubenswrapper[4775]: E0123 14:32:02.714176 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271"
Jan 23 14:32:04 crc kubenswrapper[4775]: I0123 14:32:04.827551 4775 generic.go:334] "Generic (PLEG): container finished" podID="bc9f9b55-ea71-4396-82bf-2a49788ccc42" containerID="af2e3d2fa526f083ebc61856e091755e854affc68850f0ccf9dc55db4575410a" exitCode=0
Jan 23 14:32:04 crc kubenswrapper[4775]: I0123 14:32:04.827620 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt" event={"ID":"bc9f9b55-ea71-4396-82bf-2a49788ccc42","Type":"ContainerDied","Data":"af2e3d2fa526f083ebc61856e091755e854affc68850f0ccf9dc55db4575410a"}
Jan 23 14:32:05 crc kubenswrapper[4775]: I0123 14:32:05.118279 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:32:05 crc kubenswrapper[4775]: I0123 14:32:05.154173 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:32:05 crc kubenswrapper[4775]: I0123 14:32:05.894296 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:32:06 crc kubenswrapper[4775]: I0123 14:32:06.315115 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt"
Jan 23 14:32:06 crc kubenswrapper[4775]: I0123 14:32:06.480505 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc9f9b55-ea71-4396-82bf-2a49788ccc42-scripts\") pod \"bc9f9b55-ea71-4396-82bf-2a49788ccc42\" (UID: \"bc9f9b55-ea71-4396-82bf-2a49788ccc42\") "
Jan 23 14:32:06 crc kubenswrapper[4775]: I0123 14:32:06.480856 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc9f9b55-ea71-4396-82bf-2a49788ccc42-config-data\") pod \"bc9f9b55-ea71-4396-82bf-2a49788ccc42\" (UID: \"bc9f9b55-ea71-4396-82bf-2a49788ccc42\") "
Jan 23 14:32:06 crc kubenswrapper[4775]: I0123 14:32:06.480976 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4vcv\" (UniqueName: \"kubernetes.io/projected/bc9f9b55-ea71-4396-82bf-2a49788ccc42-kube-api-access-g4vcv\") pod \"bc9f9b55-ea71-4396-82bf-2a49788ccc42\" (UID: \"bc9f9b55-ea71-4396-82bf-2a49788ccc42\") "
Jan 23 14:32:06 crc kubenswrapper[4775]: I0123 14:32:06.490130 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc9f9b55-ea71-4396-82bf-2a49788ccc42-scripts" (OuterVolumeSpecName: "scripts") pod "bc9f9b55-ea71-4396-82bf-2a49788ccc42" (UID: "bc9f9b55-ea71-4396-82bf-2a49788ccc42"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:32:06 crc kubenswrapper[4775]: I0123 14:32:06.503365 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc9f9b55-ea71-4396-82bf-2a49788ccc42-kube-api-access-g4vcv" (OuterVolumeSpecName: "kube-api-access-g4vcv") pod "bc9f9b55-ea71-4396-82bf-2a49788ccc42" (UID: "bc9f9b55-ea71-4396-82bf-2a49788ccc42"). InnerVolumeSpecName "kube-api-access-g4vcv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:32:06 crc kubenswrapper[4775]: I0123 14:32:06.531082 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc9f9b55-ea71-4396-82bf-2a49788ccc42-config-data" (OuterVolumeSpecName: "config-data") pod "bc9f9b55-ea71-4396-82bf-2a49788ccc42" (UID: "bc9f9b55-ea71-4396-82bf-2a49788ccc42"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:32:06 crc kubenswrapper[4775]: I0123 14:32:06.583290 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4vcv\" (UniqueName: \"kubernetes.io/projected/bc9f9b55-ea71-4396-82bf-2a49788ccc42-kube-api-access-g4vcv\") on node \"crc\" DevicePath \"\""
Jan 23 14:32:06 crc kubenswrapper[4775]: I0123 14:32:06.583331 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc9f9b55-ea71-4396-82bf-2a49788ccc42-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 14:32:06 crc kubenswrapper[4775]: I0123 14:32:06.583344 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc9f9b55-ea71-4396-82bf-2a49788ccc42-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:32:06 crc kubenswrapper[4775]: I0123 14:32:06.853614 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt"
Jan 23 14:32:06 crc kubenswrapper[4775]: I0123 14:32:06.853597 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt" event={"ID":"bc9f9b55-ea71-4396-82bf-2a49788ccc42","Type":"ContainerDied","Data":"8826b475b7f4985e9df7c5956361970ec9965a4aa8e0ce85f18a8f4f7a7db30f"}
Jan 23 14:32:06 crc kubenswrapper[4775]: I0123 14:32:06.853740 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8826b475b7f4985e9df7c5956361970ec9965a4aa8e0ce85f18a8f4f7a7db30f"
Jan 23 14:32:07 crc kubenswrapper[4775]: I0123 14:32:07.060105 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:32:07 crc kubenswrapper[4775]: I0123 14:32:07.060382 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="0d3bad13-3a3b-481d-bdf4-b489422eb398" containerName="nova-kuttl-api-log" containerID="cri-o://ba05a0d9c4b290261a774649af75722313f5b48ad074a065cb9b6c8bab2da2a2" gracePeriod=30
Jan 23 14:32:07 crc kubenswrapper[4775]: I0123 14:32:07.060862 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="0d3bad13-3a3b-481d-bdf4-b489422eb398" containerName="nova-kuttl-api-api" containerID="cri-o://103670e5116f644eb28979803164ccf24c988eb5cc7579ff5f555659c22ef470" gracePeriod=30
Jan 23 14:32:07 crc kubenswrapper[4775]: I0123 14:32:07.080107 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:32:07 crc kubenswrapper[4775]: I0123 14:32:07.142250 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:32:07 crc kubenswrapper[4775]: I0123 14:32:07.142459 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="09f88f54-b0df-4938-a185-e104d3da129f" containerName="nova-kuttl-metadata-log" containerID="cri-o://42e590fbdd808de903331a89bedf41b486f926b43ce507f437a540852724aa36" gracePeriod=30
Jan 23 14:32:07 crc kubenswrapper[4775]: I0123 14:32:07.142604 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="09f88f54-b0df-4938-a185-e104d3da129f" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://911eac7827470d33639c04aeb00f69e90569747c8131bfa8a5ec515539b3db54" gracePeriod=30
Jan 23 14:32:07 crc kubenswrapper[4775]: I0123 14:32:07.870683 4775 generic.go:334] "Generic (PLEG): container finished" podID="0d3bad13-3a3b-481d-bdf4-b489422eb398" containerID="ba05a0d9c4b290261a774649af75722313f5b48ad074a065cb9b6c8bab2da2a2" exitCode=143
Jan 23 14:32:07 crc kubenswrapper[4775]: I0123 14:32:07.870836 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"0d3bad13-3a3b-481d-bdf4-b489422eb398","Type":"ContainerDied","Data":"ba05a0d9c4b290261a774649af75722313f5b48ad074a065cb9b6c8bab2da2a2"}
Jan 23 14:32:07 crc kubenswrapper[4775]: I0123 14:32:07.874314 4775 generic.go:334] "Generic (PLEG): container finished" podID="09f88f54-b0df-4938-a185-e104d3da129f" containerID="42e590fbdd808de903331a89bedf41b486f926b43ce507f437a540852724aa36" exitCode=143
Jan 23 14:32:07 crc kubenswrapper[4775]: I0123 14:32:07.874424 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"09f88f54-b0df-4938-a185-e104d3da129f","Type":"ContainerDied","Data":"42e590fbdd808de903331a89bedf41b486f926b43ce507f437a540852724aa36"}
Jan 23 14:32:07 crc kubenswrapper[4775]: I0123 14:32:07.874609 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="76b301e2-214f-47ac-99b1-2cc76488c253" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://3b5691326bb0d178b840a8fa1eaea852a75d863f4e1bb47f05120caae1fc9a32" gracePeriod=30
Jan 23 14:32:10 crc kubenswrapper[4775]: E0123 14:32:10.119842 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b5691326bb0d178b840a8fa1eaea852a75d863f4e1bb47f05120caae1fc9a32" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 23 14:32:10 crc kubenswrapper[4775]: E0123 14:32:10.122741 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b5691326bb0d178b840a8fa1eaea852a75d863f4e1bb47f05120caae1fc9a32" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 23 14:32:10 crc kubenswrapper[4775]: E0123 14:32:10.124609 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b5691326bb0d178b840a8fa1eaea852a75d863f4e1bb47f05120caae1fc9a32" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 23 14:32:10 crc kubenswrapper[4775]: E0123 14:32:10.124695 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="76b301e2-214f-47ac-99b1-2cc76488c253" containerName="nova-kuttl-scheduler-scheduler"
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.650369 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.667459 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d3bad13-3a3b-481d-bdf4-b489422eb398-config-data\") pod \"0d3bad13-3a3b-481d-bdf4-b489422eb398\" (UID: \"0d3bad13-3a3b-481d-bdf4-b489422eb398\") "
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.667617 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbbc7\" (UniqueName: \"kubernetes.io/projected/0d3bad13-3a3b-481d-bdf4-b489422eb398-kube-api-access-tbbc7\") pod \"0d3bad13-3a3b-481d-bdf4-b489422eb398\" (UID: \"0d3bad13-3a3b-481d-bdf4-b489422eb398\") "
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.667674 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d3bad13-3a3b-481d-bdf4-b489422eb398-logs\") pod \"0d3bad13-3a3b-481d-bdf4-b489422eb398\" (UID: \"0d3bad13-3a3b-481d-bdf4-b489422eb398\") "
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.668880 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d3bad13-3a3b-481d-bdf4-b489422eb398-logs" (OuterVolumeSpecName: "logs") pod "0d3bad13-3a3b-481d-bdf4-b489422eb398" (UID: "0d3bad13-3a3b-481d-bdf4-b489422eb398"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.676633 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d3bad13-3a3b-481d-bdf4-b489422eb398-kube-api-access-tbbc7" (OuterVolumeSpecName: "kube-api-access-tbbc7") pod "0d3bad13-3a3b-481d-bdf4-b489422eb398" (UID: "0d3bad13-3a3b-481d-bdf4-b489422eb398"). InnerVolumeSpecName "kube-api-access-tbbc7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.696394 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d3bad13-3a3b-481d-bdf4-b489422eb398-config-data" (OuterVolumeSpecName: "config-data") pod "0d3bad13-3a3b-481d-bdf4-b489422eb398" (UID: "0d3bad13-3a3b-481d-bdf4-b489422eb398"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.746853 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.771088 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09f88f54-b0df-4938-a185-e104d3da129f-logs\") pod \"09f88f54-b0df-4938-a185-e104d3da129f\" (UID: \"09f88f54-b0df-4938-a185-e104d3da129f\") "
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.771222 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09f88f54-b0df-4938-a185-e104d3da129f-config-data\") pod \"09f88f54-b0df-4938-a185-e104d3da129f\" (UID: \"09f88f54-b0df-4938-a185-e104d3da129f\") "
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.771280 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8k7g\" (UniqueName: \"kubernetes.io/projected/09f88f54-b0df-4938-a185-e104d3da129f-kube-api-access-g8k7g\") pod \"09f88f54-b0df-4938-a185-e104d3da129f\" (UID: \"09f88f54-b0df-4938-a185-e104d3da129f\") "
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.772252 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbbc7\" (UniqueName: \"kubernetes.io/projected/0d3bad13-3a3b-481d-bdf4-b489422eb398-kube-api-access-tbbc7\") on node \"crc\" DevicePath \"\""
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.772286 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d3bad13-3a3b-481d-bdf4-b489422eb398-logs\") on node \"crc\" DevicePath \"\""
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.772300 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d3bad13-3a3b-481d-bdf4-b489422eb398-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.773785 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09f88f54-b0df-4938-a185-e104d3da129f-logs" (OuterVolumeSpecName: "logs") pod "09f88f54-b0df-4938-a185-e104d3da129f" (UID: "09f88f54-b0df-4938-a185-e104d3da129f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.777125 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09f88f54-b0df-4938-a185-e104d3da129f-kube-api-access-g8k7g" (OuterVolumeSpecName: "kube-api-access-g8k7g") pod "09f88f54-b0df-4938-a185-e104d3da129f" (UID: "09f88f54-b0df-4938-a185-e104d3da129f"). InnerVolumeSpecName "kube-api-access-g8k7g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.807313 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09f88f54-b0df-4938-a185-e104d3da129f-config-data" (OuterVolumeSpecName: "config-data") pod "09f88f54-b0df-4938-a185-e104d3da129f" (UID: "09f88f54-b0df-4938-a185-e104d3da129f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.876308 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09f88f54-b0df-4938-a185-e104d3da129f-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.876361 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8k7g\" (UniqueName: \"kubernetes.io/projected/09f88f54-b0df-4938-a185-e104d3da129f-kube-api-access-g8k7g\") on node \"crc\" DevicePath \"\""
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.876385 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09f88f54-b0df-4938-a185-e104d3da129f-logs\") on node \"crc\" DevicePath \"\""
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.914183 4775 generic.go:334] "Generic (PLEG): container finished" podID="0d3bad13-3a3b-481d-bdf4-b489422eb398" containerID="103670e5116f644eb28979803164ccf24c988eb5cc7579ff5f555659c22ef470" exitCode=0
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.914270 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"0d3bad13-3a3b-481d-bdf4-b489422eb398","Type":"ContainerDied","Data":"103670e5116f644eb28979803164ccf24c988eb5cc7579ff5f555659c22ef470"}
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.914301 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.914339 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"0d3bad13-3a3b-481d-bdf4-b489422eb398","Type":"ContainerDied","Data":"c0e734d13db605e2174d6c175ee9bc97984c9197667104eb9fbac9e883c62175"}
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.914370 4775 scope.go:117] "RemoveContainer" containerID="103670e5116f644eb28979803164ccf24c988eb5cc7579ff5f555659c22ef470"
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.918917 4775 generic.go:334] "Generic (PLEG): container finished" podID="09f88f54-b0df-4938-a185-e104d3da129f" containerID="911eac7827470d33639c04aeb00f69e90569747c8131bfa8a5ec515539b3db54" exitCode=0
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.918967 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.918961 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"09f88f54-b0df-4938-a185-e104d3da129f","Type":"ContainerDied","Data":"911eac7827470d33639c04aeb00f69e90569747c8131bfa8a5ec515539b3db54"}
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.919025 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"09f88f54-b0df-4938-a185-e104d3da129f","Type":"ContainerDied","Data":"bd79ebf6c58c98b770d795796bd533c1b69a9ee304177bb14c2b8c21e15ca799"}
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.955112 4775 scope.go:117] "RemoveContainer" containerID="ba05a0d9c4b290261a774649af75722313f5b48ad074a065cb9b6c8bab2da2a2"
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.983878 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.996859 4775 scope.go:117] "RemoveContainer" containerID="103670e5116f644eb28979803164ccf24c988eb5cc7579ff5f555659c22ef470"
Jan 23 14:32:10 crc kubenswrapper[4775]: E0123 14:32:10.997640 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"103670e5116f644eb28979803164ccf24c988eb5cc7579ff5f555659c22ef470\": container with ID starting with 103670e5116f644eb28979803164ccf24c988eb5cc7579ff5f555659c22ef470 not found: ID does not exist" containerID="103670e5116f644eb28979803164ccf24c988eb5cc7579ff5f555659c22ef470"
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.997707 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"103670e5116f644eb28979803164ccf24c988eb5cc7579ff5f555659c22ef470"} err="failed to get container status \"103670e5116f644eb28979803164ccf24c988eb5cc7579ff5f555659c22ef470\": rpc error: code = NotFound desc = could not find container \"103670e5116f644eb28979803164ccf24c988eb5cc7579ff5f555659c22ef470\": container with ID starting with 103670e5116f644eb28979803164ccf24c988eb5cc7579ff5f555659c22ef470 not found: ID does not exist"
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.997763 4775 scope.go:117] "RemoveContainer" containerID="ba05a0d9c4b290261a774649af75722313f5b48ad074a065cb9b6c8bab2da2a2"
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.997978 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:32:10 crc kubenswrapper[4775]: E0123 14:32:10.998544 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba05a0d9c4b290261a774649af75722313f5b48ad074a065cb9b6c8bab2da2a2\": container with ID starting with ba05a0d9c4b290261a774649af75722313f5b48ad074a065cb9b6c8bab2da2a2 not found: ID does not exist" containerID="ba05a0d9c4b290261a774649af75722313f5b48ad074a065cb9b6c8bab2da2a2"
Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.998588 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba05a0d9c4b290261a774649af75722313f5b48ad074a065cb9b6c8bab2da2a2"} err="failed to get container status \"ba05a0d9c4b290261a774649af75722313f5b48ad074a065cb9b6c8bab2da2a2\": rpc error: code = NotFound desc = could not find container 
\"ba05a0d9c4b290261a774649af75722313f5b48ad074a065cb9b6c8bab2da2a2\": container with ID starting with ba05a0d9c4b290261a774649af75722313f5b48ad074a065cb9b6c8bab2da2a2 not found: ID does not exist" Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.998616 4775 scope.go:117] "RemoveContainer" containerID="911eac7827470d33639c04aeb00f69e90569747c8131bfa8a5ec515539b3db54" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.022953 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.038945 4775 scope.go:117] "RemoveContainer" containerID="42e590fbdd808de903331a89bedf41b486f926b43ce507f437a540852724aa36" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.047171 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.054433 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:32:11 crc kubenswrapper[4775]: E0123 14:32:11.054950 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9f9b55-ea71-4396-82bf-2a49788ccc42" containerName="nova-manage" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.054980 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9f9b55-ea71-4396-82bf-2a49788ccc42" containerName="nova-manage" Jan 23 14:32:11 crc kubenswrapper[4775]: E0123 14:32:11.055000 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d3bad13-3a3b-481d-bdf4-b489422eb398" containerName="nova-kuttl-api-log" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.055014 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d3bad13-3a3b-481d-bdf4-b489422eb398" containerName="nova-kuttl-api-log" Jan 23 14:32:11 crc kubenswrapper[4775]: E0123 14:32:11.055041 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d3bad13-3a3b-481d-bdf4-b489422eb398" containerName="nova-kuttl-api-api" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.055055 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d3bad13-3a3b-481d-bdf4-b489422eb398" containerName="nova-kuttl-api-api" Jan 23 14:32:11 crc kubenswrapper[4775]: E0123 14:32:11.055101 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09f88f54-b0df-4938-a185-e104d3da129f" containerName="nova-kuttl-metadata-metadata" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.055113 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f88f54-b0df-4938-a185-e104d3da129f" containerName="nova-kuttl-metadata-metadata" Jan 23 14:32:11 crc kubenswrapper[4775]: E0123 14:32:11.055136 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09f88f54-b0df-4938-a185-e104d3da129f" containerName="nova-kuttl-metadata-log" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.055148 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f88f54-b0df-4938-a185-e104d3da129f" containerName="nova-kuttl-metadata-log" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.055450 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="09f88f54-b0df-4938-a185-e104d3da129f" containerName="nova-kuttl-metadata-log" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.055475 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="09f88f54-b0df-4938-a185-e104d3da129f" containerName="nova-kuttl-metadata-metadata" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 
14:32:11.055499 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d3bad13-3a3b-481d-bdf4-b489422eb398" containerName="nova-kuttl-api-api" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.055536 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc9f9b55-ea71-4396-82bf-2a49788ccc42" containerName="nova-manage" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.055549 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d3bad13-3a3b-481d-bdf4-b489422eb398" containerName="nova-kuttl-api-log" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.056903 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.059613 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.065343 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.068637 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.075103 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.078023 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.086061 4775 scope.go:117] "RemoveContainer" containerID="911eac7827470d33639c04aeb00f69e90569747c8131bfa8a5ec515539b3db54" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.087383 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skdhl\" (UniqueName: \"kubernetes.io/projected/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-kube-api-access-skdhl\") pod \"nova-kuttl-metadata-0\" (UID: \"08cc29e8-1d83-4f1e-b343-a813a06c7f5a\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.087465 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"08cc29e8-1d83-4f1e-b343-a813a06c7f5a\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.087592 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"08cc29e8-1d83-4f1e-b343-a813a06c7f5a\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.087735 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:32:11 crc kubenswrapper[4775]: E0123 14:32:11.087796 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"911eac7827470d33639c04aeb00f69e90569747c8131bfa8a5ec515539b3db54\": container with ID starting with 911eac7827470d33639c04aeb00f69e90569747c8131bfa8a5ec515539b3db54 not found: ID does not exist" 
containerID="911eac7827470d33639c04aeb00f69e90569747c8131bfa8a5ec515539b3db54" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.087856 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"911eac7827470d33639c04aeb00f69e90569747c8131bfa8a5ec515539b3db54"} err="failed to get container status \"911eac7827470d33639c04aeb00f69e90569747c8131bfa8a5ec515539b3db54\": rpc error: code = NotFound desc = could not find container \"911eac7827470d33639c04aeb00f69e90569747c8131bfa8a5ec515539b3db54\": container with ID starting with 911eac7827470d33639c04aeb00f69e90569747c8131bfa8a5ec515539b3db54 not found: ID does not exist" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.087887 4775 scope.go:117] "RemoveContainer" containerID="42e590fbdd808de903331a89bedf41b486f926b43ce507f437a540852724aa36" Jan 23 14:32:11 crc kubenswrapper[4775]: E0123 14:32:11.089499 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42e590fbdd808de903331a89bedf41b486f926b43ce507f437a540852724aa36\": container with ID starting with 42e590fbdd808de903331a89bedf41b486f926b43ce507f437a540852724aa36 not found: ID does not exist" containerID="42e590fbdd808de903331a89bedf41b486f926b43ce507f437a540852724aa36" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.089927 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42e590fbdd808de903331a89bedf41b486f926b43ce507f437a540852724aa36"} err="failed to get container status \"42e590fbdd808de903331a89bedf41b486f926b43ce507f437a540852724aa36\": rpc error: code = NotFound desc = could not find container \"42e590fbdd808de903331a89bedf41b486f926b43ce507f437a540852724aa36\": container with ID starting with 42e590fbdd808de903331a89bedf41b486f926b43ce507f437a540852724aa36 not found: ID does not exist" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.188571 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-logs\") pod \"nova-kuttl-api-0\" (UID: \"8da8e70a-bee6-4082-a0c5-8419ea3f86a6\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.188901 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skdhl\" (UniqueName: \"kubernetes.io/projected/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-kube-api-access-skdhl\") pod \"nova-kuttl-metadata-0\" (UID: \"08cc29e8-1d83-4f1e-b343-a813a06c7f5a\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.189002 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"08cc29e8-1d83-4f1e-b343-a813a06c7f5a\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.189044 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7gh9\" (UniqueName: \"kubernetes.io/projected/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-kube-api-access-c7gh9\") pod \"nova-kuttl-api-0\" (UID: \"8da8e70a-bee6-4082-a0c5-8419ea3f86a6\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.189104 4775 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-config-data\") pod \"nova-kuttl-api-0\" (UID: \"8da8e70a-bee6-4082-a0c5-8419ea3f86a6\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.189263 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"08cc29e8-1d83-4f1e-b343-a813a06c7f5a\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.189966 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"08cc29e8-1d83-4f1e-b343-a813a06c7f5a\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.194578 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"08cc29e8-1d83-4f1e-b343-a813a06c7f5a\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.217771 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skdhl\" (UniqueName: \"kubernetes.io/projected/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-kube-api-access-skdhl\") pod \"nova-kuttl-metadata-0\" (UID: \"08cc29e8-1d83-4f1e-b343-a813a06c7f5a\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.291329 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-logs\") pod \"nova-kuttl-api-0\" (UID: \"8da8e70a-bee6-4082-a0c5-8419ea3f86a6\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.291510 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7gh9\" (UniqueName: \"kubernetes.io/projected/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-kube-api-access-c7gh9\") pod \"nova-kuttl-api-0\" (UID: \"8da8e70a-bee6-4082-a0c5-8419ea3f86a6\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.291618 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-config-data\") pod \"nova-kuttl-api-0\" (UID: \"8da8e70a-bee6-4082-a0c5-8419ea3f86a6\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.292107 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-logs\") pod \"nova-kuttl-api-0\" (UID: \"8da8e70a-bee6-4082-a0c5-8419ea3f86a6\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.297570 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-config-data\") pod \"nova-kuttl-api-0\" (UID: 
\"8da8e70a-bee6-4082-a0c5-8419ea3f86a6\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.320755 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7gh9\" (UniqueName: \"kubernetes.io/projected/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-kube-api-access-c7gh9\") pod \"nova-kuttl-api-0\" (UID: \"8da8e70a-bee6-4082-a0c5-8419ea3f86a6\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.379670 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.396007 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.730385 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09f88f54-b0df-4938-a185-e104d3da129f" path="/var/lib/kubelet/pods/09f88f54-b0df-4938-a185-e104d3da129f/volumes" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.731194 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d3bad13-3a3b-481d-bdf4-b489422eb398" path="/var/lib/kubelet/pods/0d3bad13-3a3b-481d-bdf4-b489422eb398/volumes" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.855999 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:32:11 crc kubenswrapper[4775]: W0123 14:32:11.899734 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08cc29e8_1d83_4f1e_b343_a813a06c7f5a.slice/crio-c04673dffc47a353d8b2f30b1c7c3756c9fa915a864e9169df809bc23ac4884f WatchSource:0}: Error finding container c04673dffc47a353d8b2f30b1c7c3756c9fa915a864e9169df809bc23ac4884f: Status 404 returned error can't find the container with id c04673dffc47a353d8b2f30b1c7c3756c9fa915a864e9169df809bc23ac4884f Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.949065 4775 generic.go:334] "Generic (PLEG): container finished" podID="76b301e2-214f-47ac-99b1-2cc76488c253" containerID="3b5691326bb0d178b840a8fa1eaea852a75d863f4e1bb47f05120caae1fc9a32" exitCode=0 Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.949120 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"76b301e2-214f-47ac-99b1-2cc76488c253","Type":"ContainerDied","Data":"3b5691326bb0d178b840a8fa1eaea852a75d863f4e1bb47f05120caae1fc9a32"} Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.963904 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"08cc29e8-1d83-4f1e-b343-a813a06c7f5a","Type":"ContainerStarted","Data":"c04673dffc47a353d8b2f30b1c7c3756c9fa915a864e9169df809bc23ac4884f"} Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.998714 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:32:12 crc kubenswrapper[4775]: I0123 14:32:12.251368 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:32:12 crc kubenswrapper[4775]: I0123 14:32:12.316189 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trfqx\" (UniqueName: \"kubernetes.io/projected/76b301e2-214f-47ac-99b1-2cc76488c253-kube-api-access-trfqx\") pod \"76b301e2-214f-47ac-99b1-2cc76488c253\" (UID: \"76b301e2-214f-47ac-99b1-2cc76488c253\") " Jan 23 14:32:12 crc kubenswrapper[4775]: I0123 14:32:12.316404 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b301e2-214f-47ac-99b1-2cc76488c253-config-data\") pod \"76b301e2-214f-47ac-99b1-2cc76488c253\" (UID: \"76b301e2-214f-47ac-99b1-2cc76488c253\") " Jan 23 14:32:12 crc kubenswrapper[4775]: I0123 14:32:12.322704 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76b301e2-214f-47ac-99b1-2cc76488c253-kube-api-access-trfqx" (OuterVolumeSpecName: "kube-api-access-trfqx") pod "76b301e2-214f-47ac-99b1-2cc76488c253" (UID: "76b301e2-214f-47ac-99b1-2cc76488c253"). InnerVolumeSpecName "kube-api-access-trfqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:32:12 crc kubenswrapper[4775]: I0123 14:32:12.346053 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76b301e2-214f-47ac-99b1-2cc76488c253-config-data" (OuterVolumeSpecName: "config-data") pod "76b301e2-214f-47ac-99b1-2cc76488c253" (UID: "76b301e2-214f-47ac-99b1-2cc76488c253"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:32:12 crc kubenswrapper[4775]: I0123 14:32:12.418276 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trfqx\" (UniqueName: \"kubernetes.io/projected/76b301e2-214f-47ac-99b1-2cc76488c253-kube-api-access-trfqx\") on node \"crc\" DevicePath \"\"" Jan 23 14:32:12 crc kubenswrapper[4775]: I0123 14:32:12.418334 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b301e2-214f-47ac-99b1-2cc76488c253-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:32:12 crc kubenswrapper[4775]: I0123 14:32:12.982734 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"08cc29e8-1d83-4f1e-b343-a813a06c7f5a","Type":"ContainerStarted","Data":"2ec2d8ee517098a55339c83b7adf972f94f667aba8e7519f92926f2a080db62e"} Jan 23 14:32:12 crc kubenswrapper[4775]: I0123 14:32:12.982850 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"08cc29e8-1d83-4f1e-b343-a813a06c7f5a","Type":"ContainerStarted","Data":"64ad254d6ba4ee3740ce23f48d5a83bfdac9d38cd1e51e005d44e141074beaa9"} Jan 23 14:32:12 crc kubenswrapper[4775]: I0123 14:32:12.986775 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"76b301e2-214f-47ac-99b1-2cc76488c253","Type":"ContainerDied","Data":"c44d31ef626505fe64787a5927f46c125139a4b911b2ce50c292d1e7a21655a0"} Jan 23 14:32:12 crc kubenswrapper[4775]: I0123 14:32:12.986830 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:32:12 crc kubenswrapper[4775]: I0123 14:32:12.986873 4775 scope.go:117] "RemoveContainer" containerID="3b5691326bb0d178b840a8fa1eaea852a75d863f4e1bb47f05120caae1fc9a32" Jan 23 14:32:12 crc kubenswrapper[4775]: I0123 14:32:12.994953 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"8da8e70a-bee6-4082-a0c5-8419ea3f86a6","Type":"ContainerStarted","Data":"03dae20f5ec29320c7fe34119020ccbc13c7cae126690fd030e309307e495762"} Jan 23 14:32:12 crc kubenswrapper[4775]: I0123 14:32:12.995025 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"8da8e70a-bee6-4082-a0c5-8419ea3f86a6","Type":"ContainerStarted","Data":"c0f199e96e42ee98742c70e0f678217496127272f948f51e4ea5ea7a1c513f05"} Jan 23 14:32:12 crc kubenswrapper[4775]: I0123 14:32:12.995049 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"8da8e70a-bee6-4082-a0c5-8419ea3f86a6","Type":"ContainerStarted","Data":"5bbd58bc5eb6780b68e8d968266f41a0b7126273d93210d99f32930850e03151"} Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.019534 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=3.019495626 podStartE2EDuration="3.019495626s" podCreationTimestamp="2026-01-23 14:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:32:13.014499643 +0000 UTC m=+1680.009328373" watchObservedRunningTime="2026-01-23 14:32:13.019495626 +0000 UTC m=+1680.014324406" Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.052778 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=3.052743055 podStartE2EDuration="3.052743055s" podCreationTimestamp="2026-01-23 14:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:32:13.050648799 +0000 UTC m=+1680.045477549" watchObservedRunningTime="2026-01-23 14:32:13.052743055 +0000 UTC m=+1680.047571835" Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.083031 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.088393 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.124919 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:32:13 crc kubenswrapper[4775]: E0123 14:32:13.125438 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76b301e2-214f-47ac-99b1-2cc76488c253" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.125461 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="76b301e2-214f-47ac-99b1-2cc76488c253" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.125685 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="76b301e2-214f-47ac-99b1-2cc76488c253" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.126470 4775 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.130342 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.131555 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.230941 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daaf7413-398a-4a39-a375-c130187f9726-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"daaf7413-398a-4a39-a375-c130187f9726\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.231005 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5zwr\" (UniqueName: \"kubernetes.io/projected/daaf7413-398a-4a39-a375-c130187f9726-kube-api-access-r5zwr\") pod \"nova-kuttl-scheduler-0\" (UID: \"daaf7413-398a-4a39-a375-c130187f9726\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.332979 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daaf7413-398a-4a39-a375-c130187f9726-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"daaf7413-398a-4a39-a375-c130187f9726\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.333107 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5zwr\" (UniqueName: \"kubernetes.io/projected/daaf7413-398a-4a39-a375-c130187f9726-kube-api-access-r5zwr\") pod \"nova-kuttl-scheduler-0\" (UID: \"daaf7413-398a-4a39-a375-c130187f9726\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.339021 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daaf7413-398a-4a39-a375-c130187f9726-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"daaf7413-398a-4a39-a375-c130187f9726\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.360684 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5zwr\" (UniqueName: \"kubernetes.io/projected/daaf7413-398a-4a39-a375-c130187f9726-kube-api-access-r5zwr\") pod \"nova-kuttl-scheduler-0\" (UID: \"daaf7413-398a-4a39-a375-c130187f9726\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.452382 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.726870 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76b301e2-214f-47ac-99b1-2cc76488c253" path="/var/lib/kubelet/pods/76b301e2-214f-47ac-99b1-2cc76488c253/volumes" Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.953880 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:32:13 crc kubenswrapper[4775]: W0123 14:32:13.959624 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddaaf7413_398a_4a39_a375_c130187f9726.slice/crio-97fad5da4691bcf418d5d7014464949a4751476840d2d4bd08f07e42875a279d WatchSource:0}: Error finding container 97fad5da4691bcf418d5d7014464949a4751476840d2d4bd08f07e42875a279d: Status 404 returned error can't find the container with id 97fad5da4691bcf418d5d7014464949a4751476840d2d4bd08f07e42875a279d Jan 23 14:32:14 crc kubenswrapper[4775]: I0123 14:32:14.009070 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"daaf7413-398a-4a39-a375-c130187f9726","Type":"ContainerStarted","Data":"97fad5da4691bcf418d5d7014464949a4751476840d2d4bd08f07e42875a279d"} Jan 23 14:32:14 crc kubenswrapper[4775]: I0123 14:32:14.713894 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:32:14 crc kubenswrapper[4775]: E0123 14:32:14.714120 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:32:15 crc kubenswrapper[4775]: I0123 14:32:15.020633 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"daaf7413-398a-4a39-a375-c130187f9726","Type":"ContainerStarted","Data":"3ba5fc19235d3db712a04f428f14e623c0a46cd37e971af89d028a76dc93187a"} Jan 23 14:32:15 crc kubenswrapper[4775]: I0123 14:32:15.048496 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.048472975 podStartE2EDuration="2.048472975s" podCreationTimestamp="2026-01-23 14:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:32:15.04269736 +0000 UTC m=+1682.037526130" watchObservedRunningTime="2026-01-23 14:32:15.048472975 +0000 UTC m=+1682.043301755" Jan 23 14:32:16 crc kubenswrapper[4775]: I0123 14:32:16.380611 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:16 crc kubenswrapper[4775]: I0123 14:32:16.380692 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:18 crc kubenswrapper[4775]: I0123 14:32:18.453361 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:32:21 crc kubenswrapper[4775]: I0123 14:32:21.380703 4775 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:21 crc kubenswrapper[4775]: I0123 14:32:21.381146 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:21 crc kubenswrapper[4775]: I0123 14:32:21.396896 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:21 crc kubenswrapper[4775]: I0123 14:32:21.396963 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:22 crc kubenswrapper[4775]: I0123 14:32:22.504028 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="8da8e70a-bee6-4082-a0c5-8419ea3f86a6" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.164:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:32:22 crc kubenswrapper[4775]: I0123 14:32:22.545038 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="08cc29e8-1d83-4f1e-b343-a813a06c7f5a" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.163:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:32:22 crc kubenswrapper[4775]: I0123 14:32:22.545038 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="08cc29e8-1d83-4f1e-b343-a813a06c7f5a" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.163:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:32:22 crc kubenswrapper[4775]: I0123 14:32:22.545095 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="8da8e70a-bee6-4082-a0c5-8419ea3f86a6" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.164:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:32:23 crc kubenswrapper[4775]: I0123 14:32:23.453185 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:32:23 crc kubenswrapper[4775]: I0123 14:32:23.497907 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:32:24 crc kubenswrapper[4775]: I0123 14:32:24.177832 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:32:25 crc kubenswrapper[4775]: I0123 14:32:25.714761 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:32:25 crc kubenswrapper[4775]: E0123 14:32:25.715324 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:32:31 crc kubenswrapper[4775]: I0123 14:32:31.389061 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:31 crc kubenswrapper[4775]: I0123 14:32:31.403664 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:31 crc kubenswrapper[4775]: I0123 14:32:31.406881 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:31 crc kubenswrapper[4775]: I0123 14:32:31.409036 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:31 crc kubenswrapper[4775]: I0123 14:32:31.409112 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:31 crc kubenswrapper[4775]: I0123 14:32:31.409424 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:31 crc kubenswrapper[4775]: I0123 14:32:31.409484 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:31 crc kubenswrapper[4775]: I0123 14:32:31.411373 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:31 crc kubenswrapper[4775]: I0123 14:32:31.411609 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:32 crc kubenswrapper[4775]: I0123 14:32:32.217975 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.624654 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"] Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.626241 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.639920 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.641532 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.644393 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a771c767-804b-4c42-bfc9-e6982acea366-config-data\") pod \"nova-kuttl-api-2\" (UID: \"a771c767-804b-4c42-bfc9-e6982acea366\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.644448 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a771c767-804b-4c42-bfc9-e6982acea366-logs\") pod \"nova-kuttl-api-2\" (UID: \"a771c767-804b-4c42-bfc9-e6982acea366\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.644537 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckrlr\" (UniqueName: \"kubernetes.io/projected/a771c767-804b-4c42-bfc9-e6982acea366-kube-api-access-ckrlr\") pod \"nova-kuttl-api-2\" (UID: \"a771c767-804b-4c42-bfc9-e6982acea366\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.675926 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"] Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.703350 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.745752 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a771c767-804b-4c42-bfc9-e6982acea366-config-data\") pod \"nova-kuttl-api-2\" (UID: \"a771c767-804b-4c42-bfc9-e6982acea366\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.746140 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a771c767-804b-4c42-bfc9-e6982acea366-logs\") pod \"nova-kuttl-api-2\" (UID: \"a771c767-804b-4c42-bfc9-e6982acea366\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.746453 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckrlr\" (UniqueName: \"kubernetes.io/projected/a771c767-804b-4c42-bfc9-e6982acea366-kube-api-access-ckrlr\") pod \"nova-kuttl-api-2\" (UID: \"a771c767-804b-4c42-bfc9-e6982acea366\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.746669 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45pc5\" (UniqueName: \"kubernetes.io/projected/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-kube-api-access-45pc5\") pod \"nova-kuttl-api-1\" (UID: \"f3a307d6-651f-4f43-83ec-6d1e1118f7ad\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.746982 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-logs\") pod \"nova-kuttl-api-1\" (UID: \"f3a307d6-651f-4f43-83ec-6d1e1118f7ad\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.747032 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-config-data\") pod \"nova-kuttl-api-1\" (UID: \"f3a307d6-651f-4f43-83ec-6d1e1118f7ad\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.747067 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a771c767-804b-4c42-bfc9-e6982acea366-logs\") pod \"nova-kuttl-api-2\" (UID: \"a771c767-804b-4c42-bfc9-e6982acea366\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.756476 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a771c767-804b-4c42-bfc9-e6982acea366-config-data\") pod \"nova-kuttl-api-2\" (UID: \"a771c767-804b-4c42-bfc9-e6982acea366\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.775940 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckrlr\" (UniqueName: \"kubernetes.io/projected/a771c767-804b-4c42-bfc9-e6982acea366-kube-api-access-ckrlr\") pod \"nova-kuttl-api-2\" (UID: \"a771c767-804b-4c42-bfc9-e6982acea366\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.848541 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45pc5\" (UniqueName: \"kubernetes.io/projected/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-kube-api-access-45pc5\") pod \"nova-kuttl-api-1\" (UID: \"f3a307d6-651f-4f43-83ec-6d1e1118f7ad\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.848607 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-logs\") pod \"nova-kuttl-api-1\" (UID: \"f3a307d6-651f-4f43-83ec-6d1e1118f7ad\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.848629 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-config-data\") pod \"nova-kuttl-api-1\" (UID: \"f3a307d6-651f-4f43-83ec-6d1e1118f7ad\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.849468 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-logs\") pod \"nova-kuttl-api-1\" (UID: \"f3a307d6-651f-4f43-83ec-6d1e1118f7ad\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.855333 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-config-data\") pod \"nova-kuttl-api-1\" (UID: \"f3a307d6-651f-4f43-83ec-6d1e1118f7ad\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.877492 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45pc5\" (UniqueName: \"kubernetes.io/projected/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-kube-api-access-45pc5\") pod \"nova-kuttl-api-1\" (UID: \"f3a307d6-651f-4f43-83ec-6d1e1118f7ad\") " 
pod="nova-kuttl-default/nova-kuttl-api-1" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.917192 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.919006 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.920899 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.921887 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.934595 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.949614 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grnlj\" (UniqueName: \"kubernetes.io/projected/422f57ad-3c24-4af9-aa50-c17639a07403-kube-api-access-grnlj\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"422f57ad-3c24-4af9-aa50-c17639a07403\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.949668 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/422f57ad-3c24-4af9-aa50-c17639a07403-config-data\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"422f57ad-3c24-4af9-aa50-c17639a07403\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.949736 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93184515-7dbf-4aeb-823f-0146b2a66d39-config-data\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"93184515-7dbf-4aeb-823f-0146b2a66d39\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.949759 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-985tt\" (UniqueName: \"kubernetes.io/projected/93184515-7dbf-4aeb-823f-0146b2a66d39-kube-api-access-985tt\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"93184515-7dbf-4aeb-823f-0146b2a66d39\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.965774 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.969625 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 23 14:32:34 crc kubenswrapper[4775]: I0123 14:32:34.008549 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 23 14:32:34 crc kubenswrapper[4775]: I0123 14:32:34.050719 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grnlj\" (UniqueName: \"kubernetes.io/projected/422f57ad-3c24-4af9-aa50-c17639a07403-kube-api-access-grnlj\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"422f57ad-3c24-4af9-aa50-c17639a07403\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 23 14:32:34 crc kubenswrapper[4775]: I0123 14:32:34.050784 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/422f57ad-3c24-4af9-aa50-c17639a07403-config-data\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"422f57ad-3c24-4af9-aa50-c17639a07403\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 23 14:32:34 crc kubenswrapper[4775]: I0123 14:32:34.050919 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93184515-7dbf-4aeb-823f-0146b2a66d39-config-data\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"93184515-7dbf-4aeb-823f-0146b2a66d39\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 23 14:32:34 crc kubenswrapper[4775]: I0123 14:32:34.050944 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-985tt\" (UniqueName: \"kubernetes.io/projected/93184515-7dbf-4aeb-823f-0146b2a66d39-kube-api-access-985tt\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"93184515-7dbf-4aeb-823f-0146b2a66d39\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 23 14:32:34 crc kubenswrapper[4775]: I0123 14:32:34.058429 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/422f57ad-3c24-4af9-aa50-c17639a07403-config-data\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"422f57ad-3c24-4af9-aa50-c17639a07403\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 23 14:32:34 crc kubenswrapper[4775]: I0123 14:32:34.058493 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93184515-7dbf-4aeb-823f-0146b2a66d39-config-data\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"93184515-7dbf-4aeb-823f-0146b2a66d39\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 23 14:32:34 crc kubenswrapper[4775]: I0123 14:32:34.074281 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-985tt\" (UniqueName: \"kubernetes.io/projected/93184515-7dbf-4aeb-823f-0146b2a66d39-kube-api-access-985tt\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"93184515-7dbf-4aeb-823f-0146b2a66d39\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 23 14:32:34 crc kubenswrapper[4775]: I0123 14:32:34.075576 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grnlj\" (UniqueName: \"kubernetes.io/projected/422f57ad-3c24-4af9-aa50-c17639a07403-kube-api-access-grnlj\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"422f57ad-3c24-4af9-aa50-c17639a07403\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 23 14:32:34 crc kubenswrapper[4775]: I0123 14:32:34.262359 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 23 14:32:34 crc kubenswrapper[4775]: I0123 14:32:34.281545 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 23 14:32:34 crc kubenswrapper[4775]: I0123 14:32:34.424556 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"] Jan 23 14:32:34 crc kubenswrapper[4775]: W0123 14:32:34.428920 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda771c767_804b_4c42_bfc9_e6982acea366.slice/crio-53f1805ce8ed107c85194e2afb2fe0fc7531107d1bd37dd54eace53ff7e081e3 WatchSource:0}: Error finding container 53f1805ce8ed107c85194e2afb2fe0fc7531107d1bd37dd54eace53ff7e081e3: Status 404 returned error can't find the container with id 53f1805ce8ed107c85194e2afb2fe0fc7531107d1bd37dd54eace53ff7e081e3 Jan 23 14:32:34 crc kubenswrapper[4775]: I0123 14:32:34.465874 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 23 14:32:34 crc kubenswrapper[4775]: W0123 14:32:34.491000 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3a307d6_651f_4f43_83ec_6d1e1118f7ad.slice/crio-2dcf48bbe2320b010b20887999fecd4308d678fb9880db6769d32c78a7d14c47 WatchSource:0}: Error finding container 2dcf48bbe2320b010b20887999fecd4308d678fb9880db6769d32c78a7d14c47: Status 404 returned error can't find the container with id 2dcf48bbe2320b010b20887999fecd4308d678fb9880db6769d32c78a7d14c47 Jan 23 14:32:34 crc kubenswrapper[4775]: I0123 14:32:34.597280 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 23 14:32:34 crc kubenswrapper[4775]: I0123 14:32:34.741681 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 23 14:32:34 crc kubenswrapper[4775]: W0123 14:32:34.744252 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod422f57ad_3c24_4af9_aa50_c17639a07403.slice/crio-13f3d5061361bcece8ecd154ec4ce1dd8f57aa77665423267627e59266ce27ed WatchSource:0}: Error finding container 13f3d5061361bcece8ecd154ec4ce1dd8f57aa77665423267627e59266ce27ed: Status 404 returned error can't find the container with id 13f3d5061361bcece8ecd154ec4ce1dd8f57aa77665423267627e59266ce27ed Jan 23 14:32:35 crc kubenswrapper[4775]: I0123 14:32:35.267704 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"f3a307d6-651f-4f43-83ec-6d1e1118f7ad","Type":"ContainerStarted","Data":"44b9d1efe3a792aaf862c7a3c79d1c143a7ce73765ff2de4b10ccbe7c4d3edbb"} Jan 23 14:32:35 crc kubenswrapper[4775]: I0123 14:32:35.268183 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"f3a307d6-651f-4f43-83ec-6d1e1118f7ad","Type":"ContainerStarted","Data":"9f217b6d27d3178707e2d3c8f04dc73a49c41c68988a537f0e9b988da1e4a797"} Jan 23 14:32:35 crc kubenswrapper[4775]: I0123 14:32:35.268201 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"f3a307d6-651f-4f43-83ec-6d1e1118f7ad","Type":"ContainerStarted","Data":"2dcf48bbe2320b010b20887999fecd4308d678fb9880db6769d32c78a7d14c47"} Jan 23 14:32:35 crc kubenswrapper[4775]: I0123 14:32:35.270096 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" 
event={"ID":"422f57ad-3c24-4af9-aa50-c17639a07403","Type":"ContainerStarted","Data":"0059bf4c06697e64e01608439a11541844aa72b36b84e23abc3ad0bcb9f4abe1"} Jan 23 14:32:35 crc kubenswrapper[4775]: I0123 14:32:35.270157 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" event={"ID":"422f57ad-3c24-4af9-aa50-c17639a07403","Type":"ContainerStarted","Data":"13f3d5061361bcece8ecd154ec4ce1dd8f57aa77665423267627e59266ce27ed"} Jan 23 14:32:35 crc kubenswrapper[4775]: I0123 14:32:35.270495 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 23 14:32:35 crc kubenswrapper[4775]: I0123 14:32:35.274877 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"a771c767-804b-4c42-bfc9-e6982acea366","Type":"ContainerStarted","Data":"c52e98be36b5967e54703c69ce278883c27920df3afb501eb24d5bdc613b7994"} Jan 23 14:32:35 crc kubenswrapper[4775]: I0123 14:32:35.274948 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"a771c767-804b-4c42-bfc9-e6982acea366","Type":"ContainerStarted","Data":"e07000a69e2d7dc25840cd7ce274cd0030fac44b2c6a54f98b1b488652900399"} Jan 23 14:32:35 crc kubenswrapper[4775]: I0123 14:32:35.274978 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"a771c767-804b-4c42-bfc9-e6982acea366","Type":"ContainerStarted","Data":"53f1805ce8ed107c85194e2afb2fe0fc7531107d1bd37dd54eace53ff7e081e3"} Jan 23 14:32:35 crc kubenswrapper[4775]: I0123 14:32:35.278497 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" event={"ID":"93184515-7dbf-4aeb-823f-0146b2a66d39","Type":"ContainerStarted","Data":"55b86529841f749494f871e3c1c9f9261bb198c398af7c06b847289681d88eec"} Jan 23 14:32:35 crc kubenswrapper[4775]: I0123 14:32:35.278534 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" event={"ID":"93184515-7dbf-4aeb-823f-0146b2a66d39","Type":"ContainerStarted","Data":"7f633d05d3eeb44bacac1fe7b01d7340207dd706030a31990ae0908b4cb1ede1"} Jan 23 14:32:35 crc kubenswrapper[4775]: I0123 14:32:35.278640 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 23 14:32:35 crc kubenswrapper[4775]: I0123 14:32:35.298647 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-1" podStartSLOduration=2.298620623 podStartE2EDuration="2.298620623s" podCreationTimestamp="2026-01-23 14:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:32:35.29317683 +0000 UTC m=+1702.288005590" watchObservedRunningTime="2026-01-23 14:32:35.298620623 +0000 UTC m=+1702.293449363" Jan 23 14:32:35 crc kubenswrapper[4775]: I0123 14:32:35.319493 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" podStartSLOduration=2.319473872 podStartE2EDuration="2.319473872s" podCreationTimestamp="2026-01-23 14:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:32:35.309142931 +0000 UTC m=+1702.303971671" watchObservedRunningTime="2026-01-23 14:32:35.319473872 +0000 
UTC m=+1702.314302622" Jan 23 14:32:35 crc kubenswrapper[4775]: I0123 14:32:35.327639 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" podStartSLOduration=2.327619792 podStartE2EDuration="2.327619792s" podCreationTimestamp="2026-01-23 14:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:32:35.326529211 +0000 UTC m=+1702.321357971" watchObservedRunningTime="2026-01-23 14:32:35.327619792 +0000 UTC m=+1702.322448532" Jan 23 14:32:35 crc kubenswrapper[4775]: I0123 14:32:35.355096 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-2" podStartSLOduration=2.355076387 podStartE2EDuration="2.355076387s" podCreationTimestamp="2026-01-23 14:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:32:35.349305724 +0000 UTC m=+1702.344134464" watchObservedRunningTime="2026-01-23 14:32:35.355076387 +0000 UTC m=+1702.349905137" Jan 23 14:32:37 crc kubenswrapper[4775]: I0123 14:32:37.714963 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:32:37 crc kubenswrapper[4775]: E0123 14:32:37.715702 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:32:39 crc kubenswrapper[4775]: I0123 14:32:39.312795 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 23 14:32:39 crc kubenswrapper[4775]: I0123 14:32:39.334499 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.641278 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"] Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.643720 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.649026 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"] Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.650090 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.668512 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"] Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.678041 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"] Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.771535 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"] Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.773343 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.785886 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"] Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.791990 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.794196 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqdpl\" (UniqueName: \"kubernetes.io/projected/ec05960b-b36c-408b-af7e-3b5b312882fc-kube-api-access-tqdpl\") pod \"nova-kuttl-scheduler-1\" (UID: \"ec05960b-b36c-408b-af7e-3b5b312882fc\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.794696 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3429d990-e795-4241-bb25-8871be747a75-config-data\") pod \"nova-kuttl-scheduler-2\" (UID: \"3429d990-e795-4241-bb25-8871be747a75\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.794757 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec05960b-b36c-408b-af7e-3b5b312882fc-config-data\") pod \"nova-kuttl-scheduler-1\" (UID: \"ec05960b-b36c-408b-af7e-3b5b312882fc\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.794862 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4ghk\" (UniqueName: \"kubernetes.io/projected/3429d990-e795-4241-bb25-8871be747a75-kube-api-access-v4ghk\") pod \"nova-kuttl-scheduler-2\" (UID: \"3429d990-e795-4241-bb25-8871be747a75\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.814635 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"] Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.828883 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"] Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.896290 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec05960b-b36c-408b-af7e-3b5b312882fc-config-data\") pod \"nova-kuttl-scheduler-1\" (UID: \"ec05960b-b36c-408b-af7e-3b5b312882fc\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.896373 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93e53da4-e769-460a-b299-07131d928b83-logs\") pod \"nova-kuttl-metadata-2\" (UID: \"93e53da4-e769-460a-b299-07131d928b83\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.896452 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbjn6\" (UniqueName: \"kubernetes.io/projected/1f8451d7-e2c8-4d37-838f-b5042ceabc86-kube-api-access-xbjn6\") pod \"nova-kuttl-metadata-1\" (UID: \"1f8451d7-e2c8-4d37-838f-b5042ceabc86\") " 
pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.896482 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4ghk\" (UniqueName: \"kubernetes.io/projected/3429d990-e795-4241-bb25-8871be747a75-kube-api-access-v4ghk\") pod \"nova-kuttl-scheduler-2\" (UID: \"3429d990-e795-4241-bb25-8871be747a75\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.896527 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93e53da4-e769-460a-b299-07131d928b83-config-data\") pod \"nova-kuttl-metadata-2\" (UID: \"93e53da4-e769-460a-b299-07131d928b83\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.896585 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqdpl\" (UniqueName: \"kubernetes.io/projected/ec05960b-b36c-408b-af7e-3b5b312882fc-kube-api-access-tqdpl\") pod \"nova-kuttl-scheduler-1\" (UID: \"ec05960b-b36c-408b-af7e-3b5b312882fc\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.896665 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f8451d7-e2c8-4d37-838f-b5042ceabc86-logs\") pod \"nova-kuttl-metadata-1\" (UID: \"1f8451d7-e2c8-4d37-838f-b5042ceabc86\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.896706 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3429d990-e795-4241-bb25-8871be747a75-config-data\") pod \"nova-kuttl-scheduler-2\" (UID: \"3429d990-e795-4241-bb25-8871be747a75\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.896752 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8451d7-e2c8-4d37-838f-b5042ceabc86-config-data\") pod \"nova-kuttl-metadata-1\" (UID: \"1f8451d7-e2c8-4d37-838f-b5042ceabc86\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.896786 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vthpv\" (UniqueName: \"kubernetes.io/projected/93e53da4-e769-460a-b299-07131d928b83-kube-api-access-vthpv\") pod \"nova-kuttl-metadata-2\" (UID: \"93e53da4-e769-460a-b299-07131d928b83\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.904864 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec05960b-b36c-408b-af7e-3b5b312882fc-config-data\") pod \"nova-kuttl-scheduler-1\" (UID: \"ec05960b-b36c-408b-af7e-3b5b312882fc\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.908376 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3429d990-e795-4241-bb25-8871be747a75-config-data\") pod \"nova-kuttl-scheduler-2\" (UID: \"3429d990-e795-4241-bb25-8871be747a75\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 
23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.917386 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqdpl\" (UniqueName: \"kubernetes.io/projected/ec05960b-b36c-408b-af7e-3b5b312882fc-kube-api-access-tqdpl\") pod \"nova-kuttl-scheduler-1\" (UID: \"ec05960b-b36c-408b-af7e-3b5b312882fc\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.918325 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4ghk\" (UniqueName: \"kubernetes.io/projected/3429d990-e795-4241-bb25-8871be747a75-kube-api-access-v4ghk\") pod \"nova-kuttl-scheduler-2\" (UID: \"3429d990-e795-4241-bb25-8871be747a75\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.998380 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbjn6\" (UniqueName: \"kubernetes.io/projected/1f8451d7-e2c8-4d37-838f-b5042ceabc86-kube-api-access-xbjn6\") pod \"nova-kuttl-metadata-1\" (UID: \"1f8451d7-e2c8-4d37-838f-b5042ceabc86\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.998428 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93e53da4-e769-460a-b299-07131d928b83-config-data\") pod \"nova-kuttl-metadata-2\" (UID: \"93e53da4-e769-460a-b299-07131d928b83\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.998488 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f8451d7-e2c8-4d37-838f-b5042ceabc86-logs\") pod \"nova-kuttl-metadata-1\" (UID: \"1f8451d7-e2c8-4d37-838f-b5042ceabc86\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.998523 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8451d7-e2c8-4d37-838f-b5042ceabc86-config-data\") pod \"nova-kuttl-metadata-1\" (UID: \"1f8451d7-e2c8-4d37-838f-b5042ceabc86\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.998541 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vthpv\" (UniqueName: \"kubernetes.io/projected/93e53da4-e769-460a-b299-07131d928b83-kube-api-access-vthpv\") pod \"nova-kuttl-metadata-2\" (UID: \"93e53da4-e769-460a-b299-07131d928b83\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.998567 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93e53da4-e769-460a-b299-07131d928b83-logs\") pod \"nova-kuttl-metadata-2\" (UID: \"93e53da4-e769-460a-b299-07131d928b83\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.998957 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93e53da4-e769-460a-b299-07131d928b83-logs\") pod \"nova-kuttl-metadata-2\" (UID: \"93e53da4-e769-460a-b299-07131d928b83\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.999485 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/1f8451d7-e2c8-4d37-838f-b5042ceabc86-logs\") pod \"nova-kuttl-metadata-1\" (UID: \"1f8451d7-e2c8-4d37-838f-b5042ceabc86\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.003932 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93e53da4-e769-460a-b299-07131d928b83-config-data\") pod \"nova-kuttl-metadata-2\" (UID: \"93e53da4-e769-460a-b299-07131d928b83\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.004749 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.004936 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8451d7-e2c8-4d37-838f-b5042ceabc86-config-data\") pod \"nova-kuttl-metadata-1\" (UID: \"1f8451d7-e2c8-4d37-838f-b5042ceabc86\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.011620 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.026618 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vthpv\" (UniqueName: \"kubernetes.io/projected/93e53da4-e769-460a-b299-07131d928b83-kube-api-access-vthpv\") pod \"nova-kuttl-metadata-2\" (UID: \"93e53da4-e769-460a-b299-07131d928b83\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.030717 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbjn6\" (UniqueName: \"kubernetes.io/projected/1f8451d7-e2c8-4d37-838f-b5042ceabc86-kube-api-access-xbjn6\") pod \"nova-kuttl-metadata-1\" (UID: \"1f8451d7-e2c8-4d37-838f-b5042ceabc86\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.119084 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.119519 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.495945 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"] Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.557557 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"] Jan 23 14:32:41 crc kubenswrapper[4775]: W0123 14:32:41.560260 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3429d990_e795_4241_bb25_8871be747a75.slice/crio-69f48396ed04cbb0dab3b9624bbf068a6a9d790e274fbfcc9a62a2e58dc96c61 WatchSource:0}: Error finding container 69f48396ed04cbb0dab3b9624bbf068a6a9d790e274fbfcc9a62a2e58dc96c61: Status 404 returned error can't find the container with id 69f48396ed04cbb0dab3b9624bbf068a6a9d790e274fbfcc9a62a2e58dc96c61 Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.630588 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"] Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.632313 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.651671 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"] Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.653170 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.665092 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"] Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.683680 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"] Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.702586 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"] Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.725164 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"] Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.816124 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b-config-data\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.816340 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9699c7-620b-45ed-9acf-d8d68558592a-config-data\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"cd9699c7-620b-45ed-9acf-d8d68558592a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.816412 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csm4d\" (UniqueName: \"kubernetes.io/projected/899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b-kube-api-access-csm4d\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b\") " 
pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.816528 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm4rh\" (UniqueName: \"kubernetes.io/projected/cd9699c7-620b-45ed-9acf-d8d68558592a-kube-api-access-qm4rh\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"cd9699c7-620b-45ed-9acf-d8d68558592a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.917787 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b-config-data\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.917959 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9699c7-620b-45ed-9acf-d8d68558592a-config-data\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"cd9699c7-620b-45ed-9acf-d8d68558592a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.918012 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csm4d\" (UniqueName: \"kubernetes.io/projected/899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b-kube-api-access-csm4d\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.918044 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm4rh\" (UniqueName: \"kubernetes.io/projected/cd9699c7-620b-45ed-9acf-d8d68558592a-kube-api-access-qm4rh\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"cd9699c7-620b-45ed-9acf-d8d68558592a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.923223 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b-config-data\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.923829 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9699c7-620b-45ed-9acf-d8d68558592a-config-data\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"cd9699c7-620b-45ed-9acf-d8d68558592a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.933979 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csm4d\" (UniqueName: \"kubernetes.io/projected/899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b-kube-api-access-csm4d\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.943158 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm4rh\" (UniqueName: \"kubernetes.io/projected/cd9699c7-620b-45ed-9acf-d8d68558592a-kube-api-access-qm4rh\") pod \"nova-kuttl-cell1-conductor-2\" 
(UID: \"cd9699c7-620b-45ed-9acf-d8d68558592a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.965449 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.987391 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 23 14:32:42 crc kubenswrapper[4775]: I0123 14:32:42.344525 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" event={"ID":"93e53da4-e769-460a-b299-07131d928b83","Type":"ContainerStarted","Data":"d3a4946fb4d2fe2a9a5683281e70f94df6d1c65d02c5f7cebaeee17f058e4a65"} Jan 23 14:32:42 crc kubenswrapper[4775]: I0123 14:32:42.345050 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" event={"ID":"93e53da4-e769-460a-b299-07131d928b83","Type":"ContainerStarted","Data":"a22aeb3b9de2421ab074e1ffccb7be34ee4fb1066458792f0e03da32e3b371bc"} Jan 23 14:32:42 crc kubenswrapper[4775]: I0123 14:32:42.345061 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" event={"ID":"93e53da4-e769-460a-b299-07131d928b83","Type":"ContainerStarted","Data":"60356a4b31a8069b59d95c548c20dccce95f5173efd6d074a66247e83d02c3f3"} Jan 23 14:32:42 crc kubenswrapper[4775]: I0123 14:32:42.368520 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-2" event={"ID":"3429d990-e795-4241-bb25-8871be747a75","Type":"ContainerStarted","Data":"9a528ef489a9b4b96d6b67753c9ecaca53ab635c1fd73d9b5c4711af0a4c42da"} Jan 23 14:32:42 crc kubenswrapper[4775]: I0123 14:32:42.368564 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-2" event={"ID":"3429d990-e795-4241-bb25-8871be747a75","Type":"ContainerStarted","Data":"69f48396ed04cbb0dab3b9624bbf068a6a9d790e274fbfcc9a62a2e58dc96c61"} Jan 23 14:32:42 crc kubenswrapper[4775]: I0123 14:32:42.372636 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-2" podStartSLOduration=2.372622554 podStartE2EDuration="2.372622554s" podCreationTimestamp="2026-01-23 14:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:32:42.360384729 +0000 UTC m=+1709.355213479" watchObservedRunningTime="2026-01-23 14:32:42.372622554 +0000 UTC m=+1709.367451294" Jan 23 14:32:42 crc kubenswrapper[4775]: I0123 14:32:42.376355 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-1" event={"ID":"ec05960b-b36c-408b-af7e-3b5b312882fc","Type":"ContainerStarted","Data":"736d171dedb2f0da8c4e5fe544bfa3a87f50cc86a3bce080473d5aa3898538f2"} Jan 23 14:32:42 crc kubenswrapper[4775]: I0123 14:32:42.376456 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-1" event={"ID":"ec05960b-b36c-408b-af7e-3b5b312882fc","Type":"ContainerStarted","Data":"b8e794aac4ed73289d855382e742fffe4df1b0a69c82118afa283730ec3b3a07"} Jan 23 14:32:42 crc kubenswrapper[4775]: I0123 14:32:42.381927 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" 
event={"ID":"1f8451d7-e2c8-4d37-838f-b5042ceabc86","Type":"ContainerStarted","Data":"99eb7e2e686344d06bacb14b07ca9db3cf66056ae2537d284693419e0f8c15e3"} Jan 23 14:32:42 crc kubenswrapper[4775]: I0123 14:32:42.381973 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"1f8451d7-e2c8-4d37-838f-b5042ceabc86","Type":"ContainerStarted","Data":"8bb734600e802f925272d19ee91b082bc20a92958621709db8bcda1373be8cd5"} Jan 23 14:32:42 crc kubenswrapper[4775]: I0123 14:32:42.381984 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"1f8451d7-e2c8-4d37-838f-b5042ceabc86","Type":"ContainerStarted","Data":"915f8179b0a7a6e696f78019ea25b2951e8b526105585716102ad10d5d921fdc"} Jan 23 14:32:42 crc kubenswrapper[4775]: I0123 14:32:42.385901 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-2" podStartSLOduration=2.385846787 podStartE2EDuration="2.385846787s" podCreationTimestamp="2026-01-23 14:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:32:42.383142911 +0000 UTC m=+1709.377971651" watchObservedRunningTime="2026-01-23 14:32:42.385846787 +0000 UTC m=+1709.380675527" Jan 23 14:32:42 crc kubenswrapper[4775]: I0123 14:32:42.398213 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-1" podStartSLOduration=2.398197486 podStartE2EDuration="2.398197486s" podCreationTimestamp="2026-01-23 14:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:32:42.395440238 +0000 UTC m=+1709.390268978" watchObservedRunningTime="2026-01-23 14:32:42.398197486 +0000 UTC m=+1709.393026226" Jan 23 14:32:42 crc kubenswrapper[4775]: I0123 14:32:42.429995 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"] Jan 23 14:32:42 crc kubenswrapper[4775]: I0123 14:32:42.494955 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-1" podStartSLOduration=2.494921256 podStartE2EDuration="2.494921256s" podCreationTimestamp="2026-01-23 14:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:32:42.43482149 +0000 UTC m=+1709.429650230" watchObservedRunningTime="2026-01-23 14:32:42.494921256 +0000 UTC m=+1709.489750016" Jan 23 14:32:42 crc kubenswrapper[4775]: I0123 14:32:42.497442 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"] Jan 23 14:32:43 crc kubenswrapper[4775]: I0123 14:32:43.400466 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" event={"ID":"cd9699c7-620b-45ed-9acf-d8d68558592a","Type":"ContainerStarted","Data":"a7f5a876f1b2c9412ba3369766da30eec726860b52b560828567dc91661b80f6"} Jan 23 14:32:43 crc kubenswrapper[4775]: I0123 14:32:43.400961 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" event={"ID":"cd9699c7-620b-45ed-9acf-d8d68558592a","Type":"ContainerStarted","Data":"8b77f3af6b0f185a5161fdaa5749b6a9f045ed71bd64a9e9cc6ddd8f8cc700d4"} Jan 23 14:32:43 crc kubenswrapper[4775]: I0123 
14:32:43.403996 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 23 14:32:43 crc kubenswrapper[4775]: I0123 14:32:43.419263 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" event={"ID":"899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b","Type":"ContainerStarted","Data":"3436442dbce900098e0a3b947a5679828fe40f90f8fc710a8899b8572b5ad5cc"} Jan 23 14:32:43 crc kubenswrapper[4775]: I0123 14:32:43.419355 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" event={"ID":"899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b","Type":"ContainerStarted","Data":"ff7647e6e88e0b90a60d3df6c11bd8f2d1a96b5234e9ed48e01eca64c74d9d98"} Jan 23 14:32:43 crc kubenswrapper[4775]: I0123 14:32:43.420478 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 23 14:32:43 crc kubenswrapper[4775]: I0123 14:32:43.431251 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" podStartSLOduration=2.431227715 podStartE2EDuration="2.431227715s" podCreationTimestamp="2026-01-23 14:32:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:32:43.424240367 +0000 UTC m=+1710.419069147" watchObservedRunningTime="2026-01-23 14:32:43.431227715 +0000 UTC m=+1710.426056485" Jan 23 14:32:43 crc kubenswrapper[4775]: I0123 14:32:43.469586 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" podStartSLOduration=2.469566277 podStartE2EDuration="2.469566277s" podCreationTimestamp="2026-01-23 14:32:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:32:43.451311322 +0000 UTC m=+1710.446140092" watchObservedRunningTime="2026-01-23 14:32:43.469566277 +0000 UTC m=+1710.464395027" Jan 23 14:32:43 crc kubenswrapper[4775]: I0123 14:32:43.967754 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 23 14:32:43 crc kubenswrapper[4775]: I0123 14:32:43.968484 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 23 14:32:43 crc kubenswrapper[4775]: I0123 14:32:43.992206 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 23 14:32:43 crc kubenswrapper[4775]: I0123 14:32:43.993155 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 23 14:32:45 crc kubenswrapper[4775]: I0123 14:32:45.133070 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-1" podUID="f3a307d6-651f-4f43-83ec-6d1e1118f7ad" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.167:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:32:45 crc kubenswrapper[4775]: I0123 14:32:45.133102 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-2" podUID="a771c767-804b-4c42-bfc9-e6982acea366" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.166:8774/\": context 
deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:32:45 crc kubenswrapper[4775]: I0123 14:32:45.133152 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-1" podUID="f3a307d6-651f-4f43-83ec-6d1e1118f7ad" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.167:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:32:45 crc kubenswrapper[4775]: I0123 14:32:45.133161 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-2" podUID="a771c767-804b-4c42-bfc9-e6982acea366" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.166:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:32:45 crc kubenswrapper[4775]: I0123 14:32:45.386497 4775 scope.go:117] "RemoveContainer" containerID="16a5d90dc00db76cb146a3ab929aa58cbca67687a4216b85575b35f06530fd3a" Jan 23 14:32:45 crc kubenswrapper[4775]: I0123 14:32:45.476250 4775 scope.go:117] "RemoveContainer" containerID="50f2c96b0b5892a7771fccd5951249dad10d9735e71ae46903621151778752dd" Jan 23 14:32:45 crc kubenswrapper[4775]: I0123 14:32:45.520455 4775 scope.go:117] "RemoveContainer" containerID="3089717e59d9d63482e14d904b82257965098590f1b4c79bdacedb05c6060f6e" Jan 23 14:32:45 crc kubenswrapper[4775]: I0123 14:32:45.570289 4775 scope.go:117] "RemoveContainer" containerID="dfd2790cbd2b3023e0c67bf180e375a19d1caefe130ba7bcb469b97ad55122e0" Jan 23 14:32:45 crc kubenswrapper[4775]: I0123 14:32:45.607016 4775 scope.go:117] "RemoveContainer" containerID="cf5d6f96b976fd01d4f59841045416396d0e05c1aeb5c738f3b2003a516bd24d" Jan 23 14:32:45 crc kubenswrapper[4775]: I0123 14:32:45.643195 4775 scope.go:117] "RemoveContainer" containerID="ad4721fdee0a09d6f1ae7bbee38e4c36536b30b8fa6aaeaab9d4a101c5700669" Jan 23 14:32:45 crc kubenswrapper[4775]: I0123 14:32:45.668732 4775 scope.go:117] "RemoveContainer" containerID="8fbaa9880c81768fdeafd7a8d660d5afda75513a9354f9b29aea974cf6c99474" Jan 23 14:32:46 crc kubenswrapper[4775]: I0123 14:32:46.005025 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 23 14:32:46 crc kubenswrapper[4775]: I0123 14:32:46.012340 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 23 14:32:46 crc kubenswrapper[4775]: I0123 14:32:46.119645 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 23 14:32:46 crc kubenswrapper[4775]: I0123 14:32:46.119703 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 23 14:32:46 crc kubenswrapper[4775]: I0123 14:32:46.119723 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 23 14:32:46 crc kubenswrapper[4775]: I0123 14:32:46.119740 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 23 14:32:51 crc kubenswrapper[4775]: I0123 14:32:51.005254 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 23 14:32:51 crc kubenswrapper[4775]: I0123 14:32:51.012162 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 23 14:32:51 crc 
kubenswrapper[4775]: I0123 14:32:51.050852 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 23 14:32:51 crc kubenswrapper[4775]: I0123 14:32:51.053407 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 23 14:32:51 crc kubenswrapper[4775]: I0123 14:32:51.119779 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 23 14:32:51 crc kubenswrapper[4775]: I0123 14:32:51.120022 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 23 14:32:51 crc kubenswrapper[4775]: I0123 14:32:51.120093 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 23 14:32:51 crc kubenswrapper[4775]: I0123 14:32:51.120163 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 23 14:32:51 crc kubenswrapper[4775]: I0123 14:32:51.551760 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 23 14:32:51 crc kubenswrapper[4775]: I0123 14:32:51.556413 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 23 14:32:51 crc kubenswrapper[4775]: I0123 14:32:51.715574 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:32:51 crc kubenswrapper[4775]: E0123 14:32:51.716006 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:32:52 crc kubenswrapper[4775]: I0123 14:32:52.018032 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 23 14:32:52 crc kubenswrapper[4775]: I0123 14:32:52.036055 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 23 14:32:52 crc kubenswrapper[4775]: I0123 14:32:52.284961 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-1" podUID="1f8451d7-e2c8-4d37-838f-b5042ceabc86" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.172:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:32:52 crc kubenswrapper[4775]: I0123 14:32:52.285076 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-2" podUID="93e53da4-e769-460a-b299-07131d928b83" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.173:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:32:52 crc kubenswrapper[4775]: I0123 14:32:52.285111 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-1" podUID="1f8451d7-e2c8-4d37-838f-b5042ceabc86" 
containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.172:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:32:52 crc kubenswrapper[4775]: I0123 14:32:52.285180 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-2" podUID="93e53da4-e769-460a-b299-07131d928b83" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.173:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:32:53 crc kubenswrapper[4775]: I0123 14:32:53.973432 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 23 14:32:53 crc kubenswrapper[4775]: I0123 14:32:53.974599 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 23 14:32:53 crc kubenswrapper[4775]: I0123 14:32:53.977230 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 23 14:32:53 crc kubenswrapper[4775]: I0123 14:32:53.980688 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 23 14:32:54 crc kubenswrapper[4775]: I0123 14:32:54.001996 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 23 14:32:54 crc kubenswrapper[4775]: I0123 14:32:54.003472 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 23 14:32:54 crc kubenswrapper[4775]: I0123 14:32:54.006091 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 23 14:32:54 crc kubenswrapper[4775]: I0123 14:32:54.008552 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 23 14:32:54 crc kubenswrapper[4775]: I0123 14:32:54.534669 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 23 14:32:54 crc kubenswrapper[4775]: I0123 14:32:54.534737 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 23 14:32:54 crc kubenswrapper[4775]: I0123 14:32:54.540377 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 23 14:32:54 crc kubenswrapper[4775]: I0123 14:32:54.545745 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 23 14:33:01 crc kubenswrapper[4775]: I0123 14:33:01.123211 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 23 14:33:01 crc kubenswrapper[4775]: I0123 14:33:01.124120 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 23 14:33:01 crc kubenswrapper[4775]: I0123 14:33:01.124222 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 23 14:33:01 crc kubenswrapper[4775]: I0123 14:33:01.125497 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 23 14:33:01 crc kubenswrapper[4775]: I0123 14:33:01.129346 4775 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 23 14:33:01 crc kubenswrapper[4775]: I0123 14:33:01.129393 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 23 14:33:01 crc kubenswrapper[4775]: I0123 14:33:01.130236 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 23 14:33:01 crc kubenswrapper[4775]: I0123 14:33:01.630253 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 23 14:33:02 crc kubenswrapper[4775]: I0123 14:33:02.807388 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"] Jan 23 14:33:02 crc kubenswrapper[4775]: I0123 14:33:02.807694 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-2" podUID="a771c767-804b-4c42-bfc9-e6982acea366" containerName="nova-kuttl-api-log" containerID="cri-o://e07000a69e2d7dc25840cd7ce274cd0030fac44b2c6a54f98b1b488652900399" gracePeriod=30 Jan 23 14:33:02 crc kubenswrapper[4775]: I0123 14:33:02.808232 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-2" podUID="a771c767-804b-4c42-bfc9-e6982acea366" containerName="nova-kuttl-api-api" containerID="cri-o://c52e98be36b5967e54703c69ce278883c27920df3afb501eb24d5bdc613b7994" gracePeriod=30 Jan 23 14:33:02 crc kubenswrapper[4775]: I0123 14:33:02.829436 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 23 14:33:02 crc kubenswrapper[4775]: I0123 14:33:02.829970 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-1" podUID="f3a307d6-651f-4f43-83ec-6d1e1118f7ad" containerName="nova-kuttl-api-log" containerID="cri-o://9f217b6d27d3178707e2d3c8f04dc73a49c41c68988a537f0e9b988da1e4a797" gracePeriod=30 Jan 23 14:33:02 crc kubenswrapper[4775]: I0123 14:33:02.829988 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-1" podUID="f3a307d6-651f-4f43-83ec-6d1e1118f7ad" containerName="nova-kuttl-api-api" containerID="cri-o://44b9d1efe3a792aaf862c7a3c79d1c143a7ce73765ff2de4b10ccbe7c4d3edbb" gracePeriod=30 Jan 23 14:33:03 crc kubenswrapper[4775]: I0123 14:33:03.216375 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 23 14:33:03 crc kubenswrapper[4775]: I0123 14:33:03.216842 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" podUID="422f57ad-3c24-4af9-aa50-c17639a07403" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://0059bf4c06697e64e01608439a11541844aa72b36b84e23abc3ad0bcb9f4abe1" gracePeriod=30 Jan 23 14:33:03 crc kubenswrapper[4775]: I0123 14:33:03.245333 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 23 14:33:03 crc kubenswrapper[4775]: I0123 14:33:03.245654 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" podUID="93184515-7dbf-4aeb-823f-0146b2a66d39" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://55b86529841f749494f871e3c1c9f9261bb198c398af7c06b847289681d88eec" gracePeriod=30 Jan 23 14:33:03 
crc kubenswrapper[4775]: I0123 14:33:03.649594 4775 generic.go:334] "Generic (PLEG): container finished" podID="f3a307d6-651f-4f43-83ec-6d1e1118f7ad" containerID="9f217b6d27d3178707e2d3c8f04dc73a49c41c68988a537f0e9b988da1e4a797" exitCode=143 Jan 23 14:33:03 crc kubenswrapper[4775]: I0123 14:33:03.649703 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"f3a307d6-651f-4f43-83ec-6d1e1118f7ad","Type":"ContainerDied","Data":"9f217b6d27d3178707e2d3c8f04dc73a49c41c68988a537f0e9b988da1e4a797"} Jan 23 14:33:03 crc kubenswrapper[4775]: I0123 14:33:03.652134 4775 generic.go:334] "Generic (PLEG): container finished" podID="a771c767-804b-4c42-bfc9-e6982acea366" containerID="e07000a69e2d7dc25840cd7ce274cd0030fac44b2c6a54f98b1b488652900399" exitCode=143 Jan 23 14:33:03 crc kubenswrapper[4775]: I0123 14:33:03.652214 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"a771c767-804b-4c42-bfc9-e6982acea366","Type":"ContainerDied","Data":"e07000a69e2d7dc25840cd7ce274cd0030fac44b2c6a54f98b1b488652900399"} Jan 23 14:33:04 crc kubenswrapper[4775]: E0123 14:33:04.265226 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0059bf4c06697e64e01608439a11541844aa72b36b84e23abc3ad0bcb9f4abe1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 23 14:33:04 crc kubenswrapper[4775]: E0123 14:33:04.267460 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0059bf4c06697e64e01608439a11541844aa72b36b84e23abc3ad0bcb9f4abe1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 23 14:33:04 crc kubenswrapper[4775]: E0123 14:33:04.269658 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0059bf4c06697e64e01608439a11541844aa72b36b84e23abc3ad0bcb9f4abe1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 23 14:33:04 crc kubenswrapper[4775]: E0123 14:33:04.269711 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" podUID="422f57ad-3c24-4af9-aa50-c17639a07403" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:33:04 crc kubenswrapper[4775]: E0123 14:33:04.284733 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="55b86529841f749494f871e3c1c9f9261bb198c398af7c06b847289681d88eec" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 23 14:33:04 crc kubenswrapper[4775]: E0123 14:33:04.287010 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="55b86529841f749494f871e3c1c9f9261bb198c398af7c06b847289681d88eec" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 23 14:33:04 crc kubenswrapper[4775]: E0123 14:33:04.288957 
4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="55b86529841f749494f871e3c1c9f9261bb198c398af7c06b847289681d88eec" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 23 14:33:04 crc kubenswrapper[4775]: E0123 14:33:04.289018 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" podUID="93184515-7dbf-4aeb-823f-0146b2a66d39" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:33:05 crc kubenswrapper[4775]: I0123 14:33:05.714618 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:33:05 crc kubenswrapper[4775]: E0123 14:33:05.715861 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:33:05 crc kubenswrapper[4775]: I0123 14:33:05.988303 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-api-1" podUID="f3a307d6-651f-4f43-83ec-6d1e1118f7ad" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.167:8774/\": read tcp 10.217.0.2:54538->10.217.0.167:8774: read: connection reset by peer" Jan 23 14:33:05 crc kubenswrapper[4775]: I0123 14:33:05.988906 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-api-1" podUID="f3a307d6-651f-4f43-83ec-6d1e1118f7ad" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.167:8774/\": read tcp 10.217.0.2:54542->10.217.0.167:8774: read: connection reset by peer" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.031925 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-api-2" podUID="a771c767-804b-4c42-bfc9-e6982acea366" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.166:8774/\": read tcp 10.217.0.2:45484->10.217.0.166:8774: read: connection reset by peer" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.031968 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-api-2" podUID="a771c767-804b-4c42-bfc9-e6982acea366" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.166:8774/\": read tcp 10.217.0.2:45474->10.217.0.166:8774: read: connection reset by peer" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.208961 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.213989 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.325565 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-985tt\" (UniqueName: \"kubernetes.io/projected/93184515-7dbf-4aeb-823f-0146b2a66d39-kube-api-access-985tt\") pod \"93184515-7dbf-4aeb-823f-0146b2a66d39\" (UID: \"93184515-7dbf-4aeb-823f-0146b2a66d39\") " Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.325615 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/422f57ad-3c24-4af9-aa50-c17639a07403-config-data\") pod \"422f57ad-3c24-4af9-aa50-c17639a07403\" (UID: \"422f57ad-3c24-4af9-aa50-c17639a07403\") " Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.325703 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93184515-7dbf-4aeb-823f-0146b2a66d39-config-data\") pod \"93184515-7dbf-4aeb-823f-0146b2a66d39\" (UID: \"93184515-7dbf-4aeb-823f-0146b2a66d39\") " Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.325729 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grnlj\" (UniqueName: \"kubernetes.io/projected/422f57ad-3c24-4af9-aa50-c17639a07403-kube-api-access-grnlj\") pod \"422f57ad-3c24-4af9-aa50-c17639a07403\" (UID: \"422f57ad-3c24-4af9-aa50-c17639a07403\") " Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.333118 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/422f57ad-3c24-4af9-aa50-c17639a07403-kube-api-access-grnlj" (OuterVolumeSpecName: "kube-api-access-grnlj") pod "422f57ad-3c24-4af9-aa50-c17639a07403" (UID: "422f57ad-3c24-4af9-aa50-c17639a07403"). InnerVolumeSpecName "kube-api-access-grnlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.337924 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93184515-7dbf-4aeb-823f-0146b2a66d39-kube-api-access-985tt" (OuterVolumeSpecName: "kube-api-access-985tt") pod "93184515-7dbf-4aeb-823f-0146b2a66d39" (UID: "93184515-7dbf-4aeb-823f-0146b2a66d39"). InnerVolumeSpecName "kube-api-access-985tt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.356045 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/422f57ad-3c24-4af9-aa50-c17639a07403-config-data" (OuterVolumeSpecName: "config-data") pod "422f57ad-3c24-4af9-aa50-c17639a07403" (UID: "422f57ad-3c24-4af9-aa50-c17639a07403"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.357649 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93184515-7dbf-4aeb-823f-0146b2a66d39-config-data" (OuterVolumeSpecName: "config-data") pod "93184515-7dbf-4aeb-823f-0146b2a66d39" (UID: "93184515-7dbf-4aeb-823f-0146b2a66d39"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.374999 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.428477 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-985tt\" (UniqueName: \"kubernetes.io/projected/93184515-7dbf-4aeb-823f-0146b2a66d39-kube-api-access-985tt\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.428542 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/422f57ad-3c24-4af9-aa50-c17639a07403-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.428554 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93184515-7dbf-4aeb-823f-0146b2a66d39-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.428579 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grnlj\" (UniqueName: \"kubernetes.io/projected/422f57ad-3c24-4af9-aa50-c17639a07403-kube-api-access-grnlj\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.454319 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.529579 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-logs\") pod \"f3a307d6-651f-4f43-83ec-6d1e1118f7ad\" (UID: \"f3a307d6-651f-4f43-83ec-6d1e1118f7ad\") " Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.531302 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-config-data\") pod \"f3a307d6-651f-4f43-83ec-6d1e1118f7ad\" (UID: \"f3a307d6-651f-4f43-83ec-6d1e1118f7ad\") " Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.531464 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45pc5\" (UniqueName: \"kubernetes.io/projected/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-kube-api-access-45pc5\") pod \"f3a307d6-651f-4f43-83ec-6d1e1118f7ad\" (UID: \"f3a307d6-651f-4f43-83ec-6d1e1118f7ad\") " Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.532371 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-logs" (OuterVolumeSpecName: "logs") pod "f3a307d6-651f-4f43-83ec-6d1e1118f7ad" (UID: "f3a307d6-651f-4f43-83ec-6d1e1118f7ad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.535228 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-kube-api-access-45pc5" (OuterVolumeSpecName: "kube-api-access-45pc5") pod "f3a307d6-651f-4f43-83ec-6d1e1118f7ad" (UID: "f3a307d6-651f-4f43-83ec-6d1e1118f7ad"). InnerVolumeSpecName "kube-api-access-45pc5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.550521 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-config-data" (OuterVolumeSpecName: "config-data") pod "f3a307d6-651f-4f43-83ec-6d1e1118f7ad" (UID: "f3a307d6-651f-4f43-83ec-6d1e1118f7ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.632662 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckrlr\" (UniqueName: \"kubernetes.io/projected/a771c767-804b-4c42-bfc9-e6982acea366-kube-api-access-ckrlr\") pod \"a771c767-804b-4c42-bfc9-e6982acea366\" (UID: \"a771c767-804b-4c42-bfc9-e6982acea366\") " Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.633531 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a771c767-804b-4c42-bfc9-e6982acea366-config-data\") pod \"a771c767-804b-4c42-bfc9-e6982acea366\" (UID: \"a771c767-804b-4c42-bfc9-e6982acea366\") " Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.633731 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a771c767-804b-4c42-bfc9-e6982acea366-logs\") pod \"a771c767-804b-4c42-bfc9-e6982acea366\" (UID: \"a771c767-804b-4c42-bfc9-e6982acea366\") " Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.634257 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.634365 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.634447 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45pc5\" (UniqueName: \"kubernetes.io/projected/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-kube-api-access-45pc5\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.634762 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a771c767-804b-4c42-bfc9-e6982acea366-logs" (OuterVolumeSpecName: "logs") pod "a771c767-804b-4c42-bfc9-e6982acea366" (UID: "a771c767-804b-4c42-bfc9-e6982acea366"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.635261 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a771c767-804b-4c42-bfc9-e6982acea366-kube-api-access-ckrlr" (OuterVolumeSpecName: "kube-api-access-ckrlr") pod "a771c767-804b-4c42-bfc9-e6982acea366" (UID: "a771c767-804b-4c42-bfc9-e6982acea366"). InnerVolumeSpecName "kube-api-access-ckrlr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.670094 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a771c767-804b-4c42-bfc9-e6982acea366-config-data" (OuterVolumeSpecName: "config-data") pod "a771c767-804b-4c42-bfc9-e6982acea366" (UID: "a771c767-804b-4c42-bfc9-e6982acea366"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.685267 4775 generic.go:334] "Generic (PLEG): container finished" podID="f3a307d6-651f-4f43-83ec-6d1e1118f7ad" containerID="44b9d1efe3a792aaf862c7a3c79d1c143a7ce73765ff2de4b10ccbe7c4d3edbb" exitCode=0 Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.685364 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"f3a307d6-651f-4f43-83ec-6d1e1118f7ad","Type":"ContainerDied","Data":"44b9d1efe3a792aaf862c7a3c79d1c143a7ce73765ff2de4b10ccbe7c4d3edbb"} Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.685712 4775 scope.go:117] "RemoveContainer" containerID="44b9d1efe3a792aaf862c7a3c79d1c143a7ce73765ff2de4b10ccbe7c4d3edbb" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.686048 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"f3a307d6-651f-4f43-83ec-6d1e1118f7ad","Type":"ContainerDied","Data":"2dcf48bbe2320b010b20887999fecd4308d678fb9880db6769d32c78a7d14c47"} Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.686365 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.687982 4775 generic.go:334] "Generic (PLEG): container finished" podID="422f57ad-3c24-4af9-aa50-c17639a07403" containerID="0059bf4c06697e64e01608439a11541844aa72b36b84e23abc3ad0bcb9f4abe1" exitCode=0 Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.688123 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.688164 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" event={"ID":"422f57ad-3c24-4af9-aa50-c17639a07403","Type":"ContainerDied","Data":"0059bf4c06697e64e01608439a11541844aa72b36b84e23abc3ad0bcb9f4abe1"} Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.688607 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" event={"ID":"422f57ad-3c24-4af9-aa50-c17639a07403","Type":"ContainerDied","Data":"13f3d5061361bcece8ecd154ec4ce1dd8f57aa77665423267627e59266ce27ed"} Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.697833 4775 generic.go:334] "Generic (PLEG): container finished" podID="a771c767-804b-4c42-bfc9-e6982acea366" containerID="c52e98be36b5967e54703c69ce278883c27920df3afb501eb24d5bdc613b7994" exitCode=0 Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.697890 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"a771c767-804b-4c42-bfc9-e6982acea366","Type":"ContainerDied","Data":"c52e98be36b5967e54703c69ce278883c27920df3afb501eb24d5bdc613b7994"} Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.697913 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"a771c767-804b-4c42-bfc9-e6982acea366","Type":"ContainerDied","Data":"53f1805ce8ed107c85194e2afb2fe0fc7531107d1bd37dd54eace53ff7e081e3"} Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.697970 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.715354 4775 scope.go:117] "RemoveContainer" containerID="9f217b6d27d3178707e2d3c8f04dc73a49c41c68988a537f0e9b988da1e4a797" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.720953 4775 generic.go:334] "Generic (PLEG): container finished" podID="93184515-7dbf-4aeb-823f-0146b2a66d39" containerID="55b86529841f749494f871e3c1c9f9261bb198c398af7c06b847289681d88eec" exitCode=0 Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.721001 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" event={"ID":"93184515-7dbf-4aeb-823f-0146b2a66d39","Type":"ContainerDied","Data":"55b86529841f749494f871e3c1c9f9261bb198c398af7c06b847289681d88eec"} Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.721305 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" event={"ID":"93184515-7dbf-4aeb-823f-0146b2a66d39","Type":"ContainerDied","Data":"7f633d05d3eeb44bacac1fe7b01d7340207dd706030a31990ae0908b4cb1ede1"} Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.721030 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.735726 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.741817 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a771c767-804b-4c42-bfc9-e6982acea366-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.741849 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckrlr\" (UniqueName: \"kubernetes.io/projected/a771c767-804b-4c42-bfc9-e6982acea366-kube-api-access-ckrlr\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.741880 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a771c767-804b-4c42-bfc9-e6982acea366-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.742414 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.743332 4775 scope.go:117] "RemoveContainer" containerID="44b9d1efe3a792aaf862c7a3c79d1c143a7ce73765ff2de4b10ccbe7c4d3edbb" Jan 23 14:33:06 crc kubenswrapper[4775]: E0123 14:33:06.745492 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44b9d1efe3a792aaf862c7a3c79d1c143a7ce73765ff2de4b10ccbe7c4d3edbb\": container with ID starting with 44b9d1efe3a792aaf862c7a3c79d1c143a7ce73765ff2de4b10ccbe7c4d3edbb not found: ID does not exist" containerID="44b9d1efe3a792aaf862c7a3c79d1c143a7ce73765ff2de4b10ccbe7c4d3edbb" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.745525 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44b9d1efe3a792aaf862c7a3c79d1c143a7ce73765ff2de4b10ccbe7c4d3edbb"} err="failed to get container status \"44b9d1efe3a792aaf862c7a3c79d1c143a7ce73765ff2de4b10ccbe7c4d3edbb\": rpc error: code = NotFound desc = could not find container 
\"44b9d1efe3a792aaf862c7a3c79d1c143a7ce73765ff2de4b10ccbe7c4d3edbb\": container with ID starting with 44b9d1efe3a792aaf862c7a3c79d1c143a7ce73765ff2de4b10ccbe7c4d3edbb not found: ID does not exist" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.745574 4775 scope.go:117] "RemoveContainer" containerID="9f217b6d27d3178707e2d3c8f04dc73a49c41c68988a537f0e9b988da1e4a797" Jan 23 14:33:06 crc kubenswrapper[4775]: E0123 14:33:06.749432 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f217b6d27d3178707e2d3c8f04dc73a49c41c68988a537f0e9b988da1e4a797\": container with ID starting with 9f217b6d27d3178707e2d3c8f04dc73a49c41c68988a537f0e9b988da1e4a797 not found: ID does not exist" containerID="9f217b6d27d3178707e2d3c8f04dc73a49c41c68988a537f0e9b988da1e4a797" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.749457 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f217b6d27d3178707e2d3c8f04dc73a49c41c68988a537f0e9b988da1e4a797"} err="failed to get container status \"9f217b6d27d3178707e2d3c8f04dc73a49c41c68988a537f0e9b988da1e4a797\": rpc error: code = NotFound desc = could not find container \"9f217b6d27d3178707e2d3c8f04dc73a49c41c68988a537f0e9b988da1e4a797\": container with ID starting with 9f217b6d27d3178707e2d3c8f04dc73a49c41c68988a537f0e9b988da1e4a797 not found: ID does not exist" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.749479 4775 scope.go:117] "RemoveContainer" containerID="0059bf4c06697e64e01608439a11541844aa72b36b84e23abc3ad0bcb9f4abe1" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.758226 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.767070 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.781113 4775 scope.go:117] "RemoveContainer" containerID="0059bf4c06697e64e01608439a11541844aa72b36b84e23abc3ad0bcb9f4abe1" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.784447 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"] Jan 23 14:33:06 crc kubenswrapper[4775]: E0123 14:33:06.784476 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0059bf4c06697e64e01608439a11541844aa72b36b84e23abc3ad0bcb9f4abe1\": container with ID starting with 0059bf4c06697e64e01608439a11541844aa72b36b84e23abc3ad0bcb9f4abe1 not found: ID does not exist" containerID="0059bf4c06697e64e01608439a11541844aa72b36b84e23abc3ad0bcb9f4abe1" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.784512 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0059bf4c06697e64e01608439a11541844aa72b36b84e23abc3ad0bcb9f4abe1"} err="failed to get container status \"0059bf4c06697e64e01608439a11541844aa72b36b84e23abc3ad0bcb9f4abe1\": rpc error: code = NotFound desc = could not find container \"0059bf4c06697e64e01608439a11541844aa72b36b84e23abc3ad0bcb9f4abe1\": container with ID starting with 0059bf4c06697e64e01608439a11541844aa72b36b84e23abc3ad0bcb9f4abe1 not found: ID does not exist" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.784537 4775 scope.go:117] "RemoveContainer" containerID="c52e98be36b5967e54703c69ce278883c27920df3afb501eb24d5bdc613b7994" Jan 23 14:33:06 crc kubenswrapper[4775]: 
I0123 14:33:06.794113 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"] Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.801998 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.811326 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.816584 4775 scope.go:117] "RemoveContainer" containerID="e07000a69e2d7dc25840cd7ce274cd0030fac44b2c6a54f98b1b488652900399" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.876849 4775 scope.go:117] "RemoveContainer" containerID="c52e98be36b5967e54703c69ce278883c27920df3afb501eb24d5bdc613b7994" Jan 23 14:33:06 crc kubenswrapper[4775]: E0123 14:33:06.877299 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c52e98be36b5967e54703c69ce278883c27920df3afb501eb24d5bdc613b7994\": container with ID starting with c52e98be36b5967e54703c69ce278883c27920df3afb501eb24d5bdc613b7994 not found: ID does not exist" containerID="c52e98be36b5967e54703c69ce278883c27920df3afb501eb24d5bdc613b7994" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.877330 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c52e98be36b5967e54703c69ce278883c27920df3afb501eb24d5bdc613b7994"} err="failed to get container status \"c52e98be36b5967e54703c69ce278883c27920df3afb501eb24d5bdc613b7994\": rpc error: code = NotFound desc = could not find container \"c52e98be36b5967e54703c69ce278883c27920df3afb501eb24d5bdc613b7994\": container with ID starting with c52e98be36b5967e54703c69ce278883c27920df3afb501eb24d5bdc613b7994 not found: ID does not exist" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.877351 4775 scope.go:117] "RemoveContainer" containerID="e07000a69e2d7dc25840cd7ce274cd0030fac44b2c6a54f98b1b488652900399" Jan 23 14:33:06 crc kubenswrapper[4775]: E0123 14:33:06.877572 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e07000a69e2d7dc25840cd7ce274cd0030fac44b2c6a54f98b1b488652900399\": container with ID starting with e07000a69e2d7dc25840cd7ce274cd0030fac44b2c6a54f98b1b488652900399 not found: ID does not exist" containerID="e07000a69e2d7dc25840cd7ce274cd0030fac44b2c6a54f98b1b488652900399" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.877597 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e07000a69e2d7dc25840cd7ce274cd0030fac44b2c6a54f98b1b488652900399"} err="failed to get container status \"e07000a69e2d7dc25840cd7ce274cd0030fac44b2c6a54f98b1b488652900399\": rpc error: code = NotFound desc = could not find container \"e07000a69e2d7dc25840cd7ce274cd0030fac44b2c6a54f98b1b488652900399\": container with ID starting with e07000a69e2d7dc25840cd7ce274cd0030fac44b2c6a54f98b1b488652900399 not found: ID does not exist" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.877611 4775 scope.go:117] "RemoveContainer" containerID="55b86529841f749494f871e3c1c9f9261bb198c398af7c06b847289681d88eec" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.897008 4775 scope.go:117] "RemoveContainer" containerID="55b86529841f749494f871e3c1c9f9261bb198c398af7c06b847289681d88eec" Jan 23 14:33:06 crc kubenswrapper[4775]: E0123 14:33:06.897336 4775 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55b86529841f749494f871e3c1c9f9261bb198c398af7c06b847289681d88eec\": container with ID starting with 55b86529841f749494f871e3c1c9f9261bb198c398af7c06b847289681d88eec not found: ID does not exist" containerID="55b86529841f749494f871e3c1c9f9261bb198c398af7c06b847289681d88eec" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.897384 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55b86529841f749494f871e3c1c9f9261bb198c398af7c06b847289681d88eec"} err="failed to get container status \"55b86529841f749494f871e3c1c9f9261bb198c398af7c06b847289681d88eec\": rpc error: code = NotFound desc = could not find container \"55b86529841f749494f871e3c1c9f9261bb198c398af7c06b847289681d88eec\": container with ID starting with 55b86529841f749494f871e3c1c9f9261bb198c398af7c06b847289681d88eec not found: ID does not exist" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.959054 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"] Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.959254 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-2" podUID="3429d990-e795-4241-bb25-8871be747a75" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://9a528ef489a9b4b96d6b67753c9ecaca53ab635c1fd73d9b5c4711af0a4c42da" gracePeriod=30 Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.969618 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"] Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.969839 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-1" podUID="ec05960b-b36c-408b-af7e-3b5b312882fc" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://736d171dedb2f0da8c4e5fe544bfa3a87f50cc86a3bce080473d5aa3898538f2" gracePeriod=30 Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.014704 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"] Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.017294 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-2" podUID="93e53da4-e769-460a-b299-07131d928b83" containerName="nova-kuttl-metadata-log" containerID="cri-o://a22aeb3b9de2421ab074e1ffccb7be34ee4fb1066458792f0e03da32e3b371bc" gracePeriod=30 Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.017697 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-2" podUID="93e53da4-e769-460a-b299-07131d928b83" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://d3a4946fb4d2fe2a9a5683281e70f94df6d1c65d02c5f7cebaeee17f058e4a65" gracePeriod=30 Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.040407 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"] Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.040610 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-1" podUID="1f8451d7-e2c8-4d37-838f-b5042ceabc86" containerName="nova-kuttl-metadata-log" containerID="cri-o://8bb734600e802f925272d19ee91b082bc20a92958621709db8bcda1373be8cd5" gracePeriod=30 Jan 23 
14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.040742 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-1" podUID="1f8451d7-e2c8-4d37-838f-b5042ceabc86" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://99eb7e2e686344d06bacb14b07ca9db3cf66056ae2537d284693419e0f8c15e3" gracePeriod=30 Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.278432 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"] Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.278672 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" podUID="cd9699c7-620b-45ed-9acf-d8d68558592a" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://a7f5a876f1b2c9412ba3369766da30eec726860b52b560828567dc91661b80f6" gracePeriod=30 Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.283774 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"] Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.284007 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" podUID="899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://3436442dbce900098e0a3b947a5679828fe40f90f8fc710a8899b8572b5ad5cc" gracePeriod=30 Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.732983 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="422f57ad-3c24-4af9-aa50-c17639a07403" path="/var/lib/kubelet/pods/422f57ad-3c24-4af9-aa50-c17639a07403/volumes" Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.733553 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93184515-7dbf-4aeb-823f-0146b2a66d39" path="/var/lib/kubelet/pods/93184515-7dbf-4aeb-823f-0146b2a66d39/volumes" Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.734199 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a771c767-804b-4c42-bfc9-e6982acea366" path="/var/lib/kubelet/pods/a771c767-804b-4c42-bfc9-e6982acea366/volumes" Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.735239 4775 generic.go:334] "Generic (PLEG): container finished" podID="1f8451d7-e2c8-4d37-838f-b5042ceabc86" containerID="8bb734600e802f925272d19ee91b082bc20a92958621709db8bcda1373be8cd5" exitCode=143 Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.735404 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3a307d6-651f-4f43-83ec-6d1e1118f7ad" path="/var/lib/kubelet/pods/f3a307d6-651f-4f43-83ec-6d1e1118f7ad/volumes" Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.735980 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"1f8451d7-e2c8-4d37-838f-b5042ceabc86","Type":"ContainerDied","Data":"8bb734600e802f925272d19ee91b082bc20a92958621709db8bcda1373be8cd5"} Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.739409 4775 generic.go:334] "Generic (PLEG): container finished" podID="93e53da4-e769-460a-b299-07131d928b83" containerID="a22aeb3b9de2421ab074e1ffccb7be34ee4fb1066458792f0e03da32e3b371bc" exitCode=143 Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.739435 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" 
event={"ID":"93e53da4-e769-460a-b299-07131d928b83","Type":"ContainerDied","Data":"a22aeb3b9de2421ab074e1ffccb7be34ee4fb1066458792f0e03da32e3b371bc"} Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.130956 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.211432 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.269159 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3429d990-e795-4241-bb25-8871be747a75-config-data\") pod \"3429d990-e795-4241-bb25-8871be747a75\" (UID: \"3429d990-e795-4241-bb25-8871be747a75\") " Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.269310 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4ghk\" (UniqueName: \"kubernetes.io/projected/3429d990-e795-4241-bb25-8871be747a75-kube-api-access-v4ghk\") pod \"3429d990-e795-4241-bb25-8871be747a75\" (UID: \"3429d990-e795-4241-bb25-8871be747a75\") " Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.276932 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3429d990-e795-4241-bb25-8871be747a75-kube-api-access-v4ghk" (OuterVolumeSpecName: "kube-api-access-v4ghk") pod "3429d990-e795-4241-bb25-8871be747a75" (UID: "3429d990-e795-4241-bb25-8871be747a75"). InnerVolumeSpecName "kube-api-access-v4ghk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.289757 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3429d990-e795-4241-bb25-8871be747a75-config-data" (OuterVolumeSpecName: "config-data") pod "3429d990-e795-4241-bb25-8871be747a75" (UID: "3429d990-e795-4241-bb25-8871be747a75"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.371051 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqdpl\" (UniqueName: \"kubernetes.io/projected/ec05960b-b36c-408b-af7e-3b5b312882fc-kube-api-access-tqdpl\") pod \"ec05960b-b36c-408b-af7e-3b5b312882fc\" (UID: \"ec05960b-b36c-408b-af7e-3b5b312882fc\") " Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.371425 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec05960b-b36c-408b-af7e-3b5b312882fc-config-data\") pod \"ec05960b-b36c-408b-af7e-3b5b312882fc\" (UID: \"ec05960b-b36c-408b-af7e-3b5b312882fc\") " Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.372131 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3429d990-e795-4241-bb25-8871be747a75-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.372266 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4ghk\" (UniqueName: \"kubernetes.io/projected/3429d990-e795-4241-bb25-8871be747a75-kube-api-access-v4ghk\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.374446 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec05960b-b36c-408b-af7e-3b5b312882fc-kube-api-access-tqdpl" (OuterVolumeSpecName: "kube-api-access-tqdpl") pod "ec05960b-b36c-408b-af7e-3b5b312882fc" (UID: "ec05960b-b36c-408b-af7e-3b5b312882fc"). InnerVolumeSpecName "kube-api-access-tqdpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.398316 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec05960b-b36c-408b-af7e-3b5b312882fc-config-data" (OuterVolumeSpecName: "config-data") pod "ec05960b-b36c-408b-af7e-3b5b312882fc" (UID: "ec05960b-b36c-408b-af7e-3b5b312882fc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.473981 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqdpl\" (UniqueName: \"kubernetes.io/projected/ec05960b-b36c-408b-af7e-3b5b312882fc-kube-api-access-tqdpl\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.474414 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec05960b-b36c-408b-af7e-3b5b312882fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.756743 4775 generic.go:334] "Generic (PLEG): container finished" podID="3429d990-e795-4241-bb25-8871be747a75" containerID="9a528ef489a9b4b96d6b67753c9ecaca53ab635c1fd73d9b5c4711af0a4c42da" exitCode=0 Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.756854 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-2" event={"ID":"3429d990-e795-4241-bb25-8871be747a75","Type":"ContainerDied","Data":"9a528ef489a9b4b96d6b67753c9ecaca53ab635c1fd73d9b5c4711af0a4c42da"} Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.756888 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-2" event={"ID":"3429d990-e795-4241-bb25-8871be747a75","Type":"ContainerDied","Data":"69f48396ed04cbb0dab3b9624bbf068a6a9d790e274fbfcc9a62a2e58dc96c61"} Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.756942 4775 scope.go:117] "RemoveContainer" containerID="9a528ef489a9b4b96d6b67753c9ecaca53ab635c1fd73d9b5c4711af0a4c42da" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.757461 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.768542 4775 generic.go:334] "Generic (PLEG): container finished" podID="ec05960b-b36c-408b-af7e-3b5b312882fc" containerID="736d171dedb2f0da8c4e5fe544bfa3a87f50cc86a3bce080473d5aa3898538f2" exitCode=0 Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.768603 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.768625 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-1" event={"ID":"ec05960b-b36c-408b-af7e-3b5b312882fc","Type":"ContainerDied","Data":"736d171dedb2f0da8c4e5fe544bfa3a87f50cc86a3bce080473d5aa3898538f2"} Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.768790 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-1" event={"ID":"ec05960b-b36c-408b-af7e-3b5b312882fc","Type":"ContainerDied","Data":"b8e794aac4ed73289d855382e742fffe4df1b0a69c82118afa283730ec3b3a07"} Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.772424 4775 generic.go:334] "Generic (PLEG): container finished" podID="cd9699c7-620b-45ed-9acf-d8d68558592a" containerID="a7f5a876f1b2c9412ba3369766da30eec726860b52b560828567dc91661b80f6" exitCode=0 Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.772508 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" event={"ID":"cd9699c7-620b-45ed-9acf-d8d68558592a","Type":"ContainerDied","Data":"a7f5a876f1b2c9412ba3369766da30eec726860b52b560828567dc91661b80f6"} Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.852245 4775 scope.go:117] "RemoveContainer" containerID="9a528ef489a9b4b96d6b67753c9ecaca53ab635c1fd73d9b5c4711af0a4c42da" Jan 23 14:33:08 crc kubenswrapper[4775]: E0123 14:33:08.853038 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a528ef489a9b4b96d6b67753c9ecaca53ab635c1fd73d9b5c4711af0a4c42da\": container with ID starting with 9a528ef489a9b4b96d6b67753c9ecaca53ab635c1fd73d9b5c4711af0a4c42da not found: ID does not exist" containerID="9a528ef489a9b4b96d6b67753c9ecaca53ab635c1fd73d9b5c4711af0a4c42da" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.853089 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a528ef489a9b4b96d6b67753c9ecaca53ab635c1fd73d9b5c4711af0a4c42da"} err="failed to get container status \"9a528ef489a9b4b96d6b67753c9ecaca53ab635c1fd73d9b5c4711af0a4c42da\": rpc error: code = NotFound desc = could not find container \"9a528ef489a9b4b96d6b67753c9ecaca53ab635c1fd73d9b5c4711af0a4c42da\": container with ID starting with 9a528ef489a9b4b96d6b67753c9ecaca53ab635c1fd73d9b5c4711af0a4c42da not found: ID does not exist" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.853118 4775 scope.go:117] "RemoveContainer" containerID="736d171dedb2f0da8c4e5fe544bfa3a87f50cc86a3bce080473d5aa3898538f2" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.947394 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.962306 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"] Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.974181 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"] Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.990036 4775 scope.go:117] "RemoveContainer" containerID="736d171dedb2f0da8c4e5fe544bfa3a87f50cc86a3bce080473d5aa3898538f2" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.991895 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"] Jan 23 14:33:08 crc kubenswrapper[4775]: E0123 14:33:08.998450 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"736d171dedb2f0da8c4e5fe544bfa3a87f50cc86a3bce080473d5aa3898538f2\": container with ID starting with 736d171dedb2f0da8c4e5fe544bfa3a87f50cc86a3bce080473d5aa3898538f2 not found: ID does not exist" containerID="736d171dedb2f0da8c4e5fe544bfa3a87f50cc86a3bce080473d5aa3898538f2" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.998504 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"736d171dedb2f0da8c4e5fe544bfa3a87f50cc86a3bce080473d5aa3898538f2"} err="failed to get container status \"736d171dedb2f0da8c4e5fe544bfa3a87f50cc86a3bce080473d5aa3898538f2\": rpc error: code = NotFound desc = could not find container \"736d171dedb2f0da8c4e5fe544bfa3a87f50cc86a3bce080473d5aa3898538f2\": container with ID starting with 736d171dedb2f0da8c4e5fe544bfa3a87f50cc86a3bce080473d5aa3898538f2 not found: ID does not exist" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.021266 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"] Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.091873 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm4rh\" (UniqueName: \"kubernetes.io/projected/cd9699c7-620b-45ed-9acf-d8d68558592a-kube-api-access-qm4rh\") pod \"cd9699c7-620b-45ed-9acf-d8d68558592a\" (UID: \"cd9699c7-620b-45ed-9acf-d8d68558592a\") " Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.091971 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9699c7-620b-45ed-9acf-d8d68558592a-config-data\") pod \"cd9699c7-620b-45ed-9acf-d8d68558592a\" (UID: \"cd9699c7-620b-45ed-9acf-d8d68558592a\") " Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.096352 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd9699c7-620b-45ed-9acf-d8d68558592a-kube-api-access-qm4rh" (OuterVolumeSpecName: "kube-api-access-qm4rh") pod "cd9699c7-620b-45ed-9acf-d8d68558592a" (UID: "cd9699c7-620b-45ed-9acf-d8d68558592a"). InnerVolumeSpecName "kube-api-access-qm4rh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.113318 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd9699c7-620b-45ed-9acf-d8d68558592a-config-data" (OuterVolumeSpecName: "config-data") pod "cd9699c7-620b-45ed-9acf-d8d68558592a" (UID: "cd9699c7-620b-45ed-9acf-d8d68558592a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.152548 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.199088 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qm4rh\" (UniqueName: \"kubernetes.io/projected/cd9699c7-620b-45ed-9acf-d8d68558592a-kube-api-access-qm4rh\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.199146 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9699c7-620b-45ed-9acf-d8d68558592a-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.300256 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csm4d\" (UniqueName: \"kubernetes.io/projected/899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b-kube-api-access-csm4d\") pod \"899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b\" (UID: \"899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b\") " Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.300349 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b-config-data\") pod \"899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b\" (UID: \"899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b\") " Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.304464 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b-kube-api-access-csm4d" (OuterVolumeSpecName: "kube-api-access-csm4d") pod "899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b" (UID: "899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b"). InnerVolumeSpecName "kube-api-access-csm4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.326936 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b-config-data" (OuterVolumeSpecName: "config-data") pod "899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b" (UID: "899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.402988 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csm4d\" (UniqueName: \"kubernetes.io/projected/899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b-kube-api-access-csm4d\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.403276 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.736771 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3429d990-e795-4241-bb25-8871be747a75" path="/var/lib/kubelet/pods/3429d990-e795-4241-bb25-8871be747a75/volumes" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.737944 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec05960b-b36c-408b-af7e-3b5b312882fc" path="/var/lib/kubelet/pods/ec05960b-b36c-408b-af7e-3b5b312882fc/volumes" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.805229 4775 generic.go:334] "Generic (PLEG): container finished" podID="899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b" containerID="3436442dbce900098e0a3b947a5679828fe40f90f8fc710a8899b8572b5ad5cc" exitCode=0 Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.805280 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" event={"ID":"899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b","Type":"ContainerDied","Data":"3436442dbce900098e0a3b947a5679828fe40f90f8fc710a8899b8572b5ad5cc"} Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.805340 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" event={"ID":"899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b","Type":"ContainerDied","Data":"ff7647e6e88e0b90a60d3df6c11bd8f2d1a96b5234e9ed48e01eca64c74d9d98"} Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.805363 4775 scope.go:117] "RemoveContainer" containerID="3436442dbce900098e0a3b947a5679828fe40f90f8fc710a8899b8572b5ad5cc" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.806932 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.807661 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" event={"ID":"cd9699c7-620b-45ed-9acf-d8d68558592a","Type":"ContainerDied","Data":"8b77f3af6b0f185a5161fdaa5749b6a9f045ed71bd64a9e9cc6ddd8f8cc700d4"} Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.807731 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.838605 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"] Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.842318 4775 scope.go:117] "RemoveContainer" containerID="3436442dbce900098e0a3b947a5679828fe40f90f8fc710a8899b8572b5ad5cc" Jan 23 14:33:09 crc kubenswrapper[4775]: E0123 14:33:09.842920 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3436442dbce900098e0a3b947a5679828fe40f90f8fc710a8899b8572b5ad5cc\": container with ID starting with 3436442dbce900098e0a3b947a5679828fe40f90f8fc710a8899b8572b5ad5cc not found: ID does not exist" containerID="3436442dbce900098e0a3b947a5679828fe40f90f8fc710a8899b8572b5ad5cc" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.842952 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3436442dbce900098e0a3b947a5679828fe40f90f8fc710a8899b8572b5ad5cc"} err="failed to get container status \"3436442dbce900098e0a3b947a5679828fe40f90f8fc710a8899b8572b5ad5cc\": rpc error: code = NotFound desc = could not find container \"3436442dbce900098e0a3b947a5679828fe40f90f8fc710a8899b8572b5ad5cc\": container with ID starting with 3436442dbce900098e0a3b947a5679828fe40f90f8fc710a8899b8572b5ad5cc not found: ID does not exist" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.842973 4775 scope.go:117] "RemoveContainer" containerID="a7f5a876f1b2c9412ba3369766da30eec726860b52b560828567dc91661b80f6" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.852560 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"] Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.863881 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"] Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.871800 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"] Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.630594 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.635729 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.723644 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93e53da4-e769-460a-b299-07131d928b83-config-data\") pod \"93e53da4-e769-460a-b299-07131d928b83\" (UID: \"93e53da4-e769-460a-b299-07131d928b83\") " Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.723895 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vthpv\" (UniqueName: \"kubernetes.io/projected/93e53da4-e769-460a-b299-07131d928b83-kube-api-access-vthpv\") pod \"93e53da4-e769-460a-b299-07131d928b83\" (UID: \"93e53da4-e769-460a-b299-07131d928b83\") " Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.724015 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93e53da4-e769-460a-b299-07131d928b83-logs\") pod \"93e53da4-e769-460a-b299-07131d928b83\" (UID: \"93e53da4-e769-460a-b299-07131d928b83\") " Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.724074 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f8451d7-e2c8-4d37-838f-b5042ceabc86-logs\") pod \"1f8451d7-e2c8-4d37-838f-b5042ceabc86\" (UID: \"1f8451d7-e2c8-4d37-838f-b5042ceabc86\") " Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.724141 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8451d7-e2c8-4d37-838f-b5042ceabc86-config-data\") pod \"1f8451d7-e2c8-4d37-838f-b5042ceabc86\" (UID: \"1f8451d7-e2c8-4d37-838f-b5042ceabc86\") " Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.724235 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbjn6\" (UniqueName: \"kubernetes.io/projected/1f8451d7-e2c8-4d37-838f-b5042ceabc86-kube-api-access-xbjn6\") pod \"1f8451d7-e2c8-4d37-838f-b5042ceabc86\" (UID: \"1f8451d7-e2c8-4d37-838f-b5042ceabc86\") " Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.724580 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f8451d7-e2c8-4d37-838f-b5042ceabc86-logs" (OuterVolumeSpecName: "logs") pod "1f8451d7-e2c8-4d37-838f-b5042ceabc86" (UID: "1f8451d7-e2c8-4d37-838f-b5042ceabc86"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.724621 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93e53da4-e769-460a-b299-07131d928b83-logs" (OuterVolumeSpecName: "logs") pod "93e53da4-e769-460a-b299-07131d928b83" (UID: "93e53da4-e769-460a-b299-07131d928b83"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.724961 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93e53da4-e769-460a-b299-07131d928b83-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.724993 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f8451d7-e2c8-4d37-838f-b5042ceabc86-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.729311 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93e53da4-e769-460a-b299-07131d928b83-kube-api-access-vthpv" (OuterVolumeSpecName: "kube-api-access-vthpv") pod "93e53da4-e769-460a-b299-07131d928b83" (UID: "93e53da4-e769-460a-b299-07131d928b83"). InnerVolumeSpecName "kube-api-access-vthpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.729354 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f8451d7-e2c8-4d37-838f-b5042ceabc86-kube-api-access-xbjn6" (OuterVolumeSpecName: "kube-api-access-xbjn6") pod "1f8451d7-e2c8-4d37-838f-b5042ceabc86" (UID: "1f8451d7-e2c8-4d37-838f-b5042ceabc86"). InnerVolumeSpecName "kube-api-access-xbjn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.745668 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93e53da4-e769-460a-b299-07131d928b83-config-data" (OuterVolumeSpecName: "config-data") pod "93e53da4-e769-460a-b299-07131d928b83" (UID: "93e53da4-e769-460a-b299-07131d928b83"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.747946 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f8451d7-e2c8-4d37-838f-b5042ceabc86-config-data" (OuterVolumeSpecName: "config-data") pod "1f8451d7-e2c8-4d37-838f-b5042ceabc86" (UID: "1f8451d7-e2c8-4d37-838f-b5042ceabc86"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.815818 4775 generic.go:334] "Generic (PLEG): container finished" podID="1f8451d7-e2c8-4d37-838f-b5042ceabc86" containerID="99eb7e2e686344d06bacb14b07ca9db3cf66056ae2537d284693419e0f8c15e3" exitCode=0 Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.816223 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"1f8451d7-e2c8-4d37-838f-b5042ceabc86","Type":"ContainerDied","Data":"99eb7e2e686344d06bacb14b07ca9db3cf66056ae2537d284693419e0f8c15e3"} Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.816254 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"1f8451d7-e2c8-4d37-838f-b5042ceabc86","Type":"ContainerDied","Data":"915f8179b0a7a6e696f78019ea25b2951e8b526105585716102ad10d5d921fdc"} Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.816273 4775 scope.go:117] "RemoveContainer" containerID="99eb7e2e686344d06bacb14b07ca9db3cf66056ae2537d284693419e0f8c15e3" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.816380 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.822947 4775 generic.go:334] "Generic (PLEG): container finished" podID="93e53da4-e769-460a-b299-07131d928b83" containerID="d3a4946fb4d2fe2a9a5683281e70f94df6d1c65d02c5f7cebaeee17f058e4a65" exitCode=0 Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.822977 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" event={"ID":"93e53da4-e769-460a-b299-07131d928b83","Type":"ContainerDied","Data":"d3a4946fb4d2fe2a9a5683281e70f94df6d1c65d02c5f7cebaeee17f058e4a65"} Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.822994 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" event={"ID":"93e53da4-e769-460a-b299-07131d928b83","Type":"ContainerDied","Data":"60356a4b31a8069b59d95c548c20dccce95f5173efd6d074a66247e83d02c3f3"} Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.823017 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.825643 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbjn6\" (UniqueName: \"kubernetes.io/projected/1f8451d7-e2c8-4d37-838f-b5042ceabc86-kube-api-access-xbjn6\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.825662 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93e53da4-e769-460a-b299-07131d928b83-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.825673 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vthpv\" (UniqueName: \"kubernetes.io/projected/93e53da4-e769-460a-b299-07131d928b83-kube-api-access-vthpv\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.825685 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8451d7-e2c8-4d37-838f-b5042ceabc86-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.842248 4775 scope.go:117] "RemoveContainer" containerID="8bb734600e802f925272d19ee91b082bc20a92958621709db8bcda1373be8cd5" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.859989 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"] Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.869161 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"] Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.912215 4775 scope.go:117] "RemoveContainer" containerID="99eb7e2e686344d06bacb14b07ca9db3cf66056ae2537d284693419e0f8c15e3" Jan 23 14:33:10 crc kubenswrapper[4775]: E0123 14:33:10.912862 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99eb7e2e686344d06bacb14b07ca9db3cf66056ae2537d284693419e0f8c15e3\": container with ID starting with 99eb7e2e686344d06bacb14b07ca9db3cf66056ae2537d284693419e0f8c15e3 not found: ID does not exist" containerID="99eb7e2e686344d06bacb14b07ca9db3cf66056ae2537d284693419e0f8c15e3" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.912911 4775 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"99eb7e2e686344d06bacb14b07ca9db3cf66056ae2537d284693419e0f8c15e3"} err="failed to get container status \"99eb7e2e686344d06bacb14b07ca9db3cf66056ae2537d284693419e0f8c15e3\": rpc error: code = NotFound desc = could not find container \"99eb7e2e686344d06bacb14b07ca9db3cf66056ae2537d284693419e0f8c15e3\": container with ID starting with 99eb7e2e686344d06bacb14b07ca9db3cf66056ae2537d284693419e0f8c15e3 not found: ID does not exist" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.912936 4775 scope.go:117] "RemoveContainer" containerID="8bb734600e802f925272d19ee91b082bc20a92958621709db8bcda1373be8cd5" Jan 23 14:33:10 crc kubenswrapper[4775]: E0123 14:33:10.913379 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bb734600e802f925272d19ee91b082bc20a92958621709db8bcda1373be8cd5\": container with ID starting with 8bb734600e802f925272d19ee91b082bc20a92958621709db8bcda1373be8cd5 not found: ID does not exist" containerID="8bb734600e802f925272d19ee91b082bc20a92958621709db8bcda1373be8cd5" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.913501 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bb734600e802f925272d19ee91b082bc20a92958621709db8bcda1373be8cd5"} err="failed to get container status \"8bb734600e802f925272d19ee91b082bc20a92958621709db8bcda1373be8cd5\": rpc error: code = NotFound desc = could not find container \"8bb734600e802f925272d19ee91b082bc20a92958621709db8bcda1373be8cd5\": container with ID starting with 8bb734600e802f925272d19ee91b082bc20a92958621709db8bcda1373be8cd5 not found: ID does not exist" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.913606 4775 scope.go:117] "RemoveContainer" containerID="d3a4946fb4d2fe2a9a5683281e70f94df6d1c65d02c5f7cebaeee17f058e4a65" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.915315 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"] Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.922369 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"] Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.937146 4775 scope.go:117] "RemoveContainer" containerID="a22aeb3b9de2421ab074e1ffccb7be34ee4fb1066458792f0e03da32e3b371bc" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.953401 4775 scope.go:117] "RemoveContainer" containerID="d3a4946fb4d2fe2a9a5683281e70f94df6d1c65d02c5f7cebaeee17f058e4a65" Jan 23 14:33:10 crc kubenswrapper[4775]: E0123 14:33:10.953785 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3a4946fb4d2fe2a9a5683281e70f94df6d1c65d02c5f7cebaeee17f058e4a65\": container with ID starting with d3a4946fb4d2fe2a9a5683281e70f94df6d1c65d02c5f7cebaeee17f058e4a65 not found: ID does not exist" containerID="d3a4946fb4d2fe2a9a5683281e70f94df6d1c65d02c5f7cebaeee17f058e4a65" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.953904 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3a4946fb4d2fe2a9a5683281e70f94df6d1c65d02c5f7cebaeee17f058e4a65"} err="failed to get container status \"d3a4946fb4d2fe2a9a5683281e70f94df6d1c65d02c5f7cebaeee17f058e4a65\": rpc error: code = NotFound desc = could not find container \"d3a4946fb4d2fe2a9a5683281e70f94df6d1c65d02c5f7cebaeee17f058e4a65\": container with ID starting with 
d3a4946fb4d2fe2a9a5683281e70f94df6d1c65d02c5f7cebaeee17f058e4a65 not found: ID does not exist" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.954004 4775 scope.go:117] "RemoveContainer" containerID="a22aeb3b9de2421ab074e1ffccb7be34ee4fb1066458792f0e03da32e3b371bc" Jan 23 14:33:10 crc kubenswrapper[4775]: E0123 14:33:10.954299 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a22aeb3b9de2421ab074e1ffccb7be34ee4fb1066458792f0e03da32e3b371bc\": container with ID starting with a22aeb3b9de2421ab074e1ffccb7be34ee4fb1066458792f0e03da32e3b371bc not found: ID does not exist" containerID="a22aeb3b9de2421ab074e1ffccb7be34ee4fb1066458792f0e03da32e3b371bc" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.954408 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a22aeb3b9de2421ab074e1ffccb7be34ee4fb1066458792f0e03da32e3b371bc"} err="failed to get container status \"a22aeb3b9de2421ab074e1ffccb7be34ee4fb1066458792f0e03da32e3b371bc\": rpc error: code = NotFound desc = could not find container \"a22aeb3b9de2421ab074e1ffccb7be34ee4fb1066458792f0e03da32e3b371bc\": container with ID starting with a22aeb3b9de2421ab074e1ffccb7be34ee4fb1066458792f0e03da32e3b371bc not found: ID does not exist" Jan 23 14:33:11 crc kubenswrapper[4775]: I0123 14:33:11.727540 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f8451d7-e2c8-4d37-838f-b5042ceabc86" path="/var/lib/kubelet/pods/1f8451d7-e2c8-4d37-838f-b5042ceabc86/volumes" Jan 23 14:33:11 crc kubenswrapper[4775]: I0123 14:33:11.728797 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b" path="/var/lib/kubelet/pods/899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b/volumes" Jan 23 14:33:11 crc kubenswrapper[4775]: I0123 14:33:11.729837 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93e53da4-e769-460a-b299-07131d928b83" path="/var/lib/kubelet/pods/93e53da4-e769-460a-b299-07131d928b83/volumes" Jan 23 14:33:11 crc kubenswrapper[4775]: I0123 14:33:11.732418 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd9699c7-620b-45ed-9acf-d8d68558592a" path="/var/lib/kubelet/pods/cd9699c7-620b-45ed-9acf-d8d68558592a/volumes" Jan 23 14:33:19 crc kubenswrapper[4775]: I0123 14:33:19.714301 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:33:19 crc kubenswrapper[4775]: E0123 14:33:19.715462 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:33:24 crc kubenswrapper[4775]: I0123 14:33:24.364128 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:33:24 crc kubenswrapper[4775]: I0123 14:33:24.365016 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="8da8e70a-bee6-4082-a0c5-8419ea3f86a6" containerName="nova-kuttl-api-log" containerID="cri-o://c0f199e96e42ee98742c70e0f678217496127272f948f51e4ea5ea7a1c513f05" gracePeriod=30 Jan 23 14:33:24 crc 
kubenswrapper[4775]: I0123 14:33:24.365173 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="8da8e70a-bee6-4082-a0c5-8419ea3f86a6" containerName="nova-kuttl-api-api" containerID="cri-o://03dae20f5ec29320c7fe34119020ccbc13c7cae126690fd030e309307e495762" gracePeriod=30 Jan 23 14:33:24 crc kubenswrapper[4775]: I0123 14:33:24.729065 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:33:24 crc kubenswrapper[4775]: I0123 14:33:24.729580 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="84473a0d-a6e7-41ab-8b88-07b8ed888950" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://dc374e41b812f145b9a3d5437aa30440decff971ec9b42763a18a56b3992b678" gracePeriod=30 Jan 23 14:33:24 crc kubenswrapper[4775]: I0123 14:33:24.969342 4775 generic.go:334] "Generic (PLEG): container finished" podID="8da8e70a-bee6-4082-a0c5-8419ea3f86a6" containerID="c0f199e96e42ee98742c70e0f678217496127272f948f51e4ea5ea7a1c513f05" exitCode=143 Jan 23 14:33:24 crc kubenswrapper[4775]: I0123 14:33:24.969395 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"8da8e70a-bee6-4082-a0c5-8419ea3f86a6","Type":"ContainerDied","Data":"c0f199e96e42ee98742c70e0f678217496127272f948f51e4ea5ea7a1c513f05"} Jan 23 14:33:25 crc kubenswrapper[4775]: E0123 14:33:25.748986 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dc374e41b812f145b9a3d5437aa30440decff971ec9b42763a18a56b3992b678" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 23 14:33:25 crc kubenswrapper[4775]: E0123 14:33:25.751757 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dc374e41b812f145b9a3d5437aa30440decff971ec9b42763a18a56b3992b678" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 23 14:33:25 crc kubenswrapper[4775]: E0123 14:33:25.754397 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dc374e41b812f145b9a3d5437aa30440decff971ec9b42763a18a56b3992b678" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 23 14:33:25 crc kubenswrapper[4775]: E0123 14:33:25.754529 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="84473a0d-a6e7-41ab-8b88-07b8ed888950" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:33:27 crc kubenswrapper[4775]: I0123 14:33:27.946592 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.015695 4775 generic.go:334] "Generic (PLEG): container finished" podID="8da8e70a-bee6-4082-a0c5-8419ea3f86a6" containerID="03dae20f5ec29320c7fe34119020ccbc13c7cae126690fd030e309307e495762" exitCode=0 Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.015737 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"8da8e70a-bee6-4082-a0c5-8419ea3f86a6","Type":"ContainerDied","Data":"03dae20f5ec29320c7fe34119020ccbc13c7cae126690fd030e309307e495762"} Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.015762 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"8da8e70a-bee6-4082-a0c5-8419ea3f86a6","Type":"ContainerDied","Data":"5bbd58bc5eb6780b68e8d968266f41a0b7126273d93210d99f32930850e03151"} Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.015781 4775 scope.go:117] "RemoveContainer" containerID="03dae20f5ec29320c7fe34119020ccbc13c7cae126690fd030e309307e495762" Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.015961 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.044334 4775 scope.go:117] "RemoveContainer" containerID="c0f199e96e42ee98742c70e0f678217496127272f948f51e4ea5ea7a1c513f05" Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.077773 4775 scope.go:117] "RemoveContainer" containerID="03dae20f5ec29320c7fe34119020ccbc13c7cae126690fd030e309307e495762" Jan 23 14:33:28 crc kubenswrapper[4775]: E0123 14:33:28.078367 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03dae20f5ec29320c7fe34119020ccbc13c7cae126690fd030e309307e495762\": container with ID starting with 03dae20f5ec29320c7fe34119020ccbc13c7cae126690fd030e309307e495762 not found: ID does not exist" containerID="03dae20f5ec29320c7fe34119020ccbc13c7cae126690fd030e309307e495762" Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.078438 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03dae20f5ec29320c7fe34119020ccbc13c7cae126690fd030e309307e495762"} err="failed to get container status \"03dae20f5ec29320c7fe34119020ccbc13c7cae126690fd030e309307e495762\": rpc error: code = NotFound desc = could not find container \"03dae20f5ec29320c7fe34119020ccbc13c7cae126690fd030e309307e495762\": container with ID starting with 03dae20f5ec29320c7fe34119020ccbc13c7cae126690fd030e309307e495762 not found: ID does not exist" Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.078472 4775 scope.go:117] "RemoveContainer" containerID="c0f199e96e42ee98742c70e0f678217496127272f948f51e4ea5ea7a1c513f05" Jan 23 14:33:28 crc kubenswrapper[4775]: E0123 14:33:28.078956 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0f199e96e42ee98742c70e0f678217496127272f948f51e4ea5ea7a1c513f05\": container with ID starting with c0f199e96e42ee98742c70e0f678217496127272f948f51e4ea5ea7a1c513f05 not found: ID does not exist" containerID="c0f199e96e42ee98742c70e0f678217496127272f948f51e4ea5ea7a1c513f05" Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.078997 4775 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c0f199e96e42ee98742c70e0f678217496127272f948f51e4ea5ea7a1c513f05"} err="failed to get container status \"c0f199e96e42ee98742c70e0f678217496127272f948f51e4ea5ea7a1c513f05\": rpc error: code = NotFound desc = could not find container \"c0f199e96e42ee98742c70e0f678217496127272f948f51e4ea5ea7a1c513f05\": container with ID starting with c0f199e96e42ee98742c70e0f678217496127272f948f51e4ea5ea7a1c513f05 not found: ID does not exist" Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.137528 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-config-data\") pod \"8da8e70a-bee6-4082-a0c5-8419ea3f86a6\" (UID: \"8da8e70a-bee6-4082-a0c5-8419ea3f86a6\") " Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.137580 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-logs\") pod \"8da8e70a-bee6-4082-a0c5-8419ea3f86a6\" (UID: \"8da8e70a-bee6-4082-a0c5-8419ea3f86a6\") " Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.137763 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7gh9\" (UniqueName: \"kubernetes.io/projected/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-kube-api-access-c7gh9\") pod \"8da8e70a-bee6-4082-a0c5-8419ea3f86a6\" (UID: \"8da8e70a-bee6-4082-a0c5-8419ea3f86a6\") " Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.138608 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-logs" (OuterVolumeSpecName: "logs") pod "8da8e70a-bee6-4082-a0c5-8419ea3f86a6" (UID: "8da8e70a-bee6-4082-a0c5-8419ea3f86a6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.143500 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-kube-api-access-c7gh9" (OuterVolumeSpecName: "kube-api-access-c7gh9") pod "8da8e70a-bee6-4082-a0c5-8419ea3f86a6" (UID: "8da8e70a-bee6-4082-a0c5-8419ea3f86a6"). InnerVolumeSpecName "kube-api-access-c7gh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.164010 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-config-data" (OuterVolumeSpecName: "config-data") pod "8da8e70a-bee6-4082-a0c5-8419ea3f86a6" (UID: "8da8e70a-bee6-4082-a0c5-8419ea3f86a6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.241005 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7gh9\" (UniqueName: \"kubernetes.io/projected/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-kube-api-access-c7gh9\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.241055 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.241075 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.359606 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.371270 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:33:29 crc kubenswrapper[4775]: I0123 14:33:29.727915 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8da8e70a-bee6-4082-a0c5-8419ea3f86a6" path="/var/lib/kubelet/pods/8da8e70a-bee6-4082-a0c5-8419ea3f86a6/volumes" Jan 23 14:33:29 crc kubenswrapper[4775]: I0123 14:33:29.854738 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:33:29 crc kubenswrapper[4775]: I0123 14:33:29.989279 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84473a0d-a6e7-41ab-8b88-07b8ed888950-config-data\") pod \"84473a0d-a6e7-41ab-8b88-07b8ed888950\" (UID: \"84473a0d-a6e7-41ab-8b88-07b8ed888950\") " Jan 23 14:33:29 crc kubenswrapper[4775]: I0123 14:33:29.989435 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26pl\" (UniqueName: \"kubernetes.io/projected/84473a0d-a6e7-41ab-8b88-07b8ed888950-kube-api-access-m26pl\") pod \"84473a0d-a6e7-41ab-8b88-07b8ed888950\" (UID: \"84473a0d-a6e7-41ab-8b88-07b8ed888950\") " Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.001150 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84473a0d-a6e7-41ab-8b88-07b8ed888950-kube-api-access-m26pl" (OuterVolumeSpecName: "kube-api-access-m26pl") pod "84473a0d-a6e7-41ab-8b88-07b8ed888950" (UID: "84473a0d-a6e7-41ab-8b88-07b8ed888950"). InnerVolumeSpecName "kube-api-access-m26pl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.033322 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84473a0d-a6e7-41ab-8b88-07b8ed888950-config-data" (OuterVolumeSpecName: "config-data") pod "84473a0d-a6e7-41ab-8b88-07b8ed888950" (UID: "84473a0d-a6e7-41ab-8b88-07b8ed888950"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.041268 4775 generic.go:334] "Generic (PLEG): container finished" podID="84473a0d-a6e7-41ab-8b88-07b8ed888950" containerID="dc374e41b812f145b9a3d5437aa30440decff971ec9b42763a18a56b3992b678" exitCode=0 Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.041331 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"84473a0d-a6e7-41ab-8b88-07b8ed888950","Type":"ContainerDied","Data":"dc374e41b812f145b9a3d5437aa30440decff971ec9b42763a18a56b3992b678"} Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.041369 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"84473a0d-a6e7-41ab-8b88-07b8ed888950","Type":"ContainerDied","Data":"b44ad7319eff2652d4ad8fadab672eed48adfae26f3c8e4cc8c6eb5f3b5d2bc0"} Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.041403 4775 scope.go:117] "RemoveContainer" containerID="dc374e41b812f145b9a3d5437aa30440decff971ec9b42763a18a56b3992b678" Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.041553 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.074970 4775 scope.go:117] "RemoveContainer" containerID="dc374e41b812f145b9a3d5437aa30440decff971ec9b42763a18a56b3992b678" Jan 23 14:33:30 crc kubenswrapper[4775]: E0123 14:33:30.075516 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc374e41b812f145b9a3d5437aa30440decff971ec9b42763a18a56b3992b678\": container with ID starting with dc374e41b812f145b9a3d5437aa30440decff971ec9b42763a18a56b3992b678 not found: ID does not exist" containerID="dc374e41b812f145b9a3d5437aa30440decff971ec9b42763a18a56b3992b678" Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.075599 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc374e41b812f145b9a3d5437aa30440decff971ec9b42763a18a56b3992b678"} err="failed to get container status \"dc374e41b812f145b9a3d5437aa30440decff971ec9b42763a18a56b3992b678\": rpc error: code = NotFound desc = could not find container \"dc374e41b812f145b9a3d5437aa30440decff971ec9b42763a18a56b3992b678\": container with ID starting with dc374e41b812f145b9a3d5437aa30440decff971ec9b42763a18a56b3992b678 not found: ID does not exist" Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.092613 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84473a0d-a6e7-41ab-8b88-07b8ed888950-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.092662 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m26pl\" (UniqueName: \"kubernetes.io/projected/84473a0d-a6e7-41ab-8b88-07b8ed888950-kube-api-access-m26pl\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.102788 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.110235 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.449317 4775 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.449579 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="daaf7413-398a-4a39-a375-c130187f9726" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://3ba5fc19235d3db712a04f428f14e623c0a46cd37e971af89d028a76dc93187a" gracePeriod=30 Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.570686 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.571253 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="08cc29e8-1d83-4f1e-b343-a813a06c7f5a" containerName="nova-kuttl-metadata-log" containerID="cri-o://64ad254d6ba4ee3740ce23f48d5a83bfdac9d38cd1e51e005d44e141074beaa9" gracePeriod=30 Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.573046 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="08cc29e8-1d83-4f1e-b343-a813a06c7f5a" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://2ec2d8ee517098a55339c83b7adf972f94f667aba8e7519f92926f2a080db62e" gracePeriod=30 Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.733260 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.733453 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="4e279d5d-df37-483b-9bc7-682b48b2dbc4" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://e4096d3b7888413c8e0420a378fc8bb781cb9864846833a4e649d155b711ef1a" gracePeriod=30 Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.013230 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt"] Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.019572 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt"] Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.028384 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf"] Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.034448 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf"] Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.051155 4775 generic.go:334] "Generic (PLEG): container finished" podID="08cc29e8-1d83-4f1e-b343-a813a06c7f5a" containerID="64ad254d6ba4ee3740ce23f48d5a83bfdac9d38cd1e51e005d44e141074beaa9" exitCode=143 Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.051204 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"08cc29e8-1d83-4f1e-b343-a813a06c7f5a","Type":"ContainerDied","Data":"64ad254d6ba4ee3740ce23f48d5a83bfdac9d38cd1e51e005d44e141074beaa9"} Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.054974 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell06ec2-account-delete-t28fh"] Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055291 4775 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ec05960b-b36c-408b-af7e-3b5b312882fc" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055312 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec05960b-b36c-408b-af7e-3b5b312882fc" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055328 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3429d990-e795-4241-bb25-8871be747a75" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055337 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="3429d990-e795-4241-bb25-8871be747a75" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055346 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a771c767-804b-4c42-bfc9-e6982acea366" containerName="nova-kuttl-api-api" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055352 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a771c767-804b-4c42-bfc9-e6982acea366" containerName="nova-kuttl-api-api" Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055358 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8451d7-e2c8-4d37-838f-b5042ceabc86" containerName="nova-kuttl-metadata-metadata" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055364 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8451d7-e2c8-4d37-838f-b5042ceabc86" containerName="nova-kuttl-metadata-metadata" Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055375 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a771c767-804b-4c42-bfc9-e6982acea366" containerName="nova-kuttl-api-log" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055381 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a771c767-804b-4c42-bfc9-e6982acea366" containerName="nova-kuttl-api-log" Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055392 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93e53da4-e769-460a-b299-07131d928b83" containerName="nova-kuttl-metadata-metadata" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055397 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="93e53da4-e769-460a-b299-07131d928b83" containerName="nova-kuttl-metadata-metadata" Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055408 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3a307d6-651f-4f43-83ec-6d1e1118f7ad" containerName="nova-kuttl-api-api" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055414 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3a307d6-651f-4f43-83ec-6d1e1118f7ad" containerName="nova-kuttl-api-api" Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055426 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3a307d6-651f-4f43-83ec-6d1e1118f7ad" containerName="nova-kuttl-api-log" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055431 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3a307d6-651f-4f43-83ec-6d1e1118f7ad" containerName="nova-kuttl-api-log" Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055440 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="422f57ad-3c24-4af9-aa50-c17639a07403" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055448 4775 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="422f57ad-3c24-4af9-aa50-c17639a07403" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055458 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8da8e70a-bee6-4082-a0c5-8419ea3f86a6" containerName="nova-kuttl-api-api" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055464 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="8da8e70a-bee6-4082-a0c5-8419ea3f86a6" containerName="nova-kuttl-api-api" Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055474 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93184515-7dbf-4aeb-823f-0146b2a66d39" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055480 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="93184515-7dbf-4aeb-823f-0146b2a66d39" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055491 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84473a0d-a6e7-41ab-8b88-07b8ed888950" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055497 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="84473a0d-a6e7-41ab-8b88-07b8ed888950" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055508 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b" containerName="nova-kuttl-cell1-conductor-conductor" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055513 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b" containerName="nova-kuttl-cell1-conductor-conductor" Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055522 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8451d7-e2c8-4d37-838f-b5042ceabc86" containerName="nova-kuttl-metadata-log" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055528 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8451d7-e2c8-4d37-838f-b5042ceabc86" containerName="nova-kuttl-metadata-log" Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055539 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9699c7-620b-45ed-9acf-d8d68558592a" containerName="nova-kuttl-cell1-conductor-conductor" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055545 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9699c7-620b-45ed-9acf-d8d68558592a" containerName="nova-kuttl-cell1-conductor-conductor" Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055555 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93e53da4-e769-460a-b299-07131d928b83" containerName="nova-kuttl-metadata-log" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055561 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="93e53da4-e769-460a-b299-07131d928b83" containerName="nova-kuttl-metadata-log" Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055570 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8da8e70a-bee6-4082-a0c5-8419ea3f86a6" containerName="nova-kuttl-api-log" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055575 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="8da8e70a-bee6-4082-a0c5-8419ea3f86a6" containerName="nova-kuttl-api-log" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055704 4775 
memory_manager.go:354] "RemoveStaleState removing state" podUID="899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b" containerName="nova-kuttl-cell1-conductor-conductor" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055712 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="93184515-7dbf-4aeb-823f-0146b2a66d39" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055719 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="3429d990-e795-4241-bb25-8871be747a75" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055728 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="93e53da4-e769-460a-b299-07131d928b83" containerName="nova-kuttl-metadata-log" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055738 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f8451d7-e2c8-4d37-838f-b5042ceabc86" containerName="nova-kuttl-metadata-log" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055748 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3a307d6-651f-4f43-83ec-6d1e1118f7ad" containerName="nova-kuttl-api-log" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055755 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f8451d7-e2c8-4d37-838f-b5042ceabc86" containerName="nova-kuttl-metadata-metadata" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055763 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="8da8e70a-bee6-4082-a0c5-8419ea3f86a6" containerName="nova-kuttl-api-log" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055771 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="93e53da4-e769-460a-b299-07131d928b83" containerName="nova-kuttl-metadata-metadata" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055780 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="84473a0d-a6e7-41ab-8b88-07b8ed888950" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055787 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="a771c767-804b-4c42-bfc9-e6982acea366" containerName="nova-kuttl-api-api" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055795 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="8da8e70a-bee6-4082-a0c5-8419ea3f86a6" containerName="nova-kuttl-api-api" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055818 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="a771c767-804b-4c42-bfc9-e6982acea366" containerName="nova-kuttl-api-log" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055826 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="422f57ad-3c24-4af9-aa50-c17639a07403" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055835 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3a307d6-651f-4f43-83ec-6d1e1118f7ad" containerName="nova-kuttl-api-api" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055842 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec05960b-b36c-408b-af7e-3b5b312882fc" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055851 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9699c7-620b-45ed-9acf-d8d68558592a" 
containerName="nova-kuttl-cell1-conductor-conductor" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.056335 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell06ec2-account-delete-t28fh" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.065496 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell06ec2-account-delete-t28fh"] Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.102945 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell1ba32-account-delete-hdrb4"] Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.103879 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell1ba32-account-delete-hdrb4" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.124631 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell1ba32-account-delete-hdrb4"] Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.164968 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novaapi9a1c-account-delete-8fps4"] Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.173306 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapi9a1c-account-delete-8fps4" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.175903 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapi9a1c-account-delete-8fps4"] Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.208604 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mfzx\" (UniqueName: \"kubernetes.io/projected/3f14b26a-2160-432f-a6cf-f3fab1f31afc-kube-api-access-9mfzx\") pod \"novacell06ec2-account-delete-t28fh\" (UID: \"3f14b26a-2160-432f-a6cf-f3fab1f31afc\") " pod="nova-kuttl-default/novacell06ec2-account-delete-t28fh" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.208646 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ls2l\" (UniqueName: \"kubernetes.io/projected/c25c2a05-1d9b-4551-9c92-f04da2897895-kube-api-access-4ls2l\") pod \"novacell1ba32-account-delete-hdrb4\" (UID: \"c25c2a05-1d9b-4551-9c92-f04da2897895\") " pod="nova-kuttl-default/novacell1ba32-account-delete-hdrb4" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.208733 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f14b26a-2160-432f-a6cf-f3fab1f31afc-operator-scripts\") pod \"novacell06ec2-account-delete-t28fh\" (UID: \"3f14b26a-2160-432f-a6cf-f3fab1f31afc\") " pod="nova-kuttl-default/novacell06ec2-account-delete-t28fh" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.208785 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c25c2a05-1d9b-4551-9c92-f04da2897895-operator-scripts\") pod \"novacell1ba32-account-delete-hdrb4\" (UID: \"c25c2a05-1d9b-4551-9c92-f04da2897895\") " pod="nova-kuttl-default/novacell1ba32-account-delete-hdrb4" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.291101 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.291298 4775 
kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podUID="5a2ad7dd-d80c-4eb4-8531-c2a8208bb760" containerName="nova-kuttl-cell1-novncproxy-novncproxy" containerID="cri-o://b3037b72f855e3514727ac579826433af99bcec07db67273c699c91b0c386a1b" gracePeriod=30 Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.310339 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mfzx\" (UniqueName: \"kubernetes.io/projected/3f14b26a-2160-432f-a6cf-f3fab1f31afc-kube-api-access-9mfzx\") pod \"novacell06ec2-account-delete-t28fh\" (UID: \"3f14b26a-2160-432f-a6cf-f3fab1f31afc\") " pod="nova-kuttl-default/novacell06ec2-account-delete-t28fh" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.310408 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ls2l\" (UniqueName: \"kubernetes.io/projected/c25c2a05-1d9b-4551-9c92-f04da2897895-kube-api-access-4ls2l\") pod \"novacell1ba32-account-delete-hdrb4\" (UID: \"c25c2a05-1d9b-4551-9c92-f04da2897895\") " pod="nova-kuttl-default/novacell1ba32-account-delete-hdrb4" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.310432 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l274z\" (UniqueName: \"kubernetes.io/projected/66eb744b-ea4a-4973-8492-2d652c20c447-kube-api-access-l274z\") pod \"novaapi9a1c-account-delete-8fps4\" (UID: \"66eb744b-ea4a-4973-8492-2d652c20c447\") " pod="nova-kuttl-default/novaapi9a1c-account-delete-8fps4" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.310618 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66eb744b-ea4a-4973-8492-2d652c20c447-operator-scripts\") pod \"novaapi9a1c-account-delete-8fps4\" (UID: \"66eb744b-ea4a-4973-8492-2d652c20c447\") " pod="nova-kuttl-default/novaapi9a1c-account-delete-8fps4" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.310696 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f14b26a-2160-432f-a6cf-f3fab1f31afc-operator-scripts\") pod \"novacell06ec2-account-delete-t28fh\" (UID: \"3f14b26a-2160-432f-a6cf-f3fab1f31afc\") " pod="nova-kuttl-default/novacell06ec2-account-delete-t28fh" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.310767 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c25c2a05-1d9b-4551-9c92-f04da2897895-operator-scripts\") pod \"novacell1ba32-account-delete-hdrb4\" (UID: \"c25c2a05-1d9b-4551-9c92-f04da2897895\") " pod="nova-kuttl-default/novacell1ba32-account-delete-hdrb4" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.311443 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c25c2a05-1d9b-4551-9c92-f04da2897895-operator-scripts\") pod \"novacell1ba32-account-delete-hdrb4\" (UID: \"c25c2a05-1d9b-4551-9c92-f04da2897895\") " pod="nova-kuttl-default/novacell1ba32-account-delete-hdrb4" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.311495 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f14b26a-2160-432f-a6cf-f3fab1f31afc-operator-scripts\") pod 
\"novacell06ec2-account-delete-t28fh\" (UID: \"3f14b26a-2160-432f-a6cf-f3fab1f31afc\") " pod="nova-kuttl-default/novacell06ec2-account-delete-t28fh" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.330386 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mfzx\" (UniqueName: \"kubernetes.io/projected/3f14b26a-2160-432f-a6cf-f3fab1f31afc-kube-api-access-9mfzx\") pod \"novacell06ec2-account-delete-t28fh\" (UID: \"3f14b26a-2160-432f-a6cf-f3fab1f31afc\") " pod="nova-kuttl-default/novacell06ec2-account-delete-t28fh" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.331252 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ls2l\" (UniqueName: \"kubernetes.io/projected/c25c2a05-1d9b-4551-9c92-f04da2897895-kube-api-access-4ls2l\") pod \"novacell1ba32-account-delete-hdrb4\" (UID: \"c25c2a05-1d9b-4551-9c92-f04da2897895\") " pod="nova-kuttl-default/novacell1ba32-account-delete-hdrb4" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.372144 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell06ec2-account-delete-t28fh" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.411943 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l274z\" (UniqueName: \"kubernetes.io/projected/66eb744b-ea4a-4973-8492-2d652c20c447-kube-api-access-l274z\") pod \"novaapi9a1c-account-delete-8fps4\" (UID: \"66eb744b-ea4a-4973-8492-2d652c20c447\") " pod="nova-kuttl-default/novaapi9a1c-account-delete-8fps4" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.412006 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66eb744b-ea4a-4973-8492-2d652c20c447-operator-scripts\") pod \"novaapi9a1c-account-delete-8fps4\" (UID: \"66eb744b-ea4a-4973-8492-2d652c20c447\") " pod="nova-kuttl-default/novaapi9a1c-account-delete-8fps4" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.412911 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66eb744b-ea4a-4973-8492-2d652c20c447-operator-scripts\") pod \"novaapi9a1c-account-delete-8fps4\" (UID: \"66eb744b-ea4a-4973-8492-2d652c20c447\") " pod="nova-kuttl-default/novaapi9a1c-account-delete-8fps4" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.420439 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell1ba32-account-delete-hdrb4" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.434900 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l274z\" (UniqueName: \"kubernetes.io/projected/66eb744b-ea4a-4973-8492-2d652c20c447-kube-api-access-l274z\") pod \"novaapi9a1c-account-delete-8fps4\" (UID: \"66eb744b-ea4a-4973-8492-2d652c20c447\") " pod="nova-kuttl-default/novaapi9a1c-account-delete-8fps4" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.492078 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novaapi9a1c-account-delete-8fps4"
Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.715981 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342"
Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.716587 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271"
Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.729961 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84473a0d-a6e7-41ab-8b88-07b8ed888950" path="/var/lib/kubelet/pods/84473a0d-a6e7-41ab-8b88-07b8ed888950/volumes"
Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.731028 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc9f9b55-ea71-4396-82bf-2a49788ccc42" path="/var/lib/kubelet/pods/bc9f9b55-ea71-4396-82bf-2a49788ccc42/volumes"
Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.731769 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f751d2a1-4497-4fb2-9c13-af54db584a48" path="/var/lib/kubelet/pods/f751d2a1-4497-4fb2-9c13-af54db584a48/volumes"
Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.977494 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell06ec2-account-delete-t28fh"]
Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.051005 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell1ba32-account-delete-hdrb4"]
Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.062620 4775 generic.go:334] "Generic (PLEG): container finished" podID="5a2ad7dd-d80c-4eb4-8531-c2a8208bb760" containerID="b3037b72f855e3514727ac579826433af99bcec07db67273c699c91b0c386a1b" exitCode=0
Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.062676 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"5a2ad7dd-d80c-4eb4-8531-c2a8208bb760","Type":"ContainerDied","Data":"b3037b72f855e3514727ac579826433af99bcec07db67273c699c91b0c386a1b"}
Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.066673 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell06ec2-account-delete-t28fh" event={"ID":"3f14b26a-2160-432f-a6cf-f3fab1f31afc","Type":"ContainerStarted","Data":"d0b63f0b8cc603dfbd347c0bc24572e8875a9ad0337253a29c42786093964643"}
Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.143162 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapi9a1c-account-delete-8fps4"]
Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.166998 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.241001 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kpdm\" (UniqueName: \"kubernetes.io/projected/5a2ad7dd-d80c-4eb4-8531-c2a8208bb760-kube-api-access-7kpdm\") pod \"5a2ad7dd-d80c-4eb4-8531-c2a8208bb760\" (UID: \"5a2ad7dd-d80c-4eb4-8531-c2a8208bb760\") "
Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.241487 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a2ad7dd-d80c-4eb4-8531-c2a8208bb760-config-data\") pod \"5a2ad7dd-d80c-4eb4-8531-c2a8208bb760\" (UID: \"5a2ad7dd-d80c-4eb4-8531-c2a8208bb760\") "
Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.254975 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a2ad7dd-d80c-4eb4-8531-c2a8208bb760-kube-api-access-7kpdm" (OuterVolumeSpecName: "kube-api-access-7kpdm") pod "5a2ad7dd-d80c-4eb4-8531-c2a8208bb760" (UID: "5a2ad7dd-d80c-4eb4-8531-c2a8208bb760"). InnerVolumeSpecName "kube-api-access-7kpdm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.270246 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a2ad7dd-d80c-4eb4-8531-c2a8208bb760-config-data" (OuterVolumeSpecName: "config-data") pod "5a2ad7dd-d80c-4eb4-8531-c2a8208bb760" (UID: "5a2ad7dd-d80c-4eb4-8531-c2a8208bb760"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.344164 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kpdm\" (UniqueName: \"kubernetes.io/projected/5a2ad7dd-d80c-4eb4-8531-c2a8208bb760-kube-api-access-7kpdm\") on node \"crc\" DevicePath \"\""
Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.344194 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a2ad7dd-d80c-4eb4-8531-c2a8208bb760-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.432725 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.444865 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5zwr\" (UniqueName: \"kubernetes.io/projected/daaf7413-398a-4a39-a375-c130187f9726-kube-api-access-r5zwr\") pod \"daaf7413-398a-4a39-a375-c130187f9726\" (UID: \"daaf7413-398a-4a39-a375-c130187f9726\") "
Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.444925 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daaf7413-398a-4a39-a375-c130187f9726-config-data\") pod \"daaf7413-398a-4a39-a375-c130187f9726\" (UID: \"daaf7413-398a-4a39-a375-c130187f9726\") "
Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.448726 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daaf7413-398a-4a39-a375-c130187f9726-kube-api-access-r5zwr" (OuterVolumeSpecName: "kube-api-access-r5zwr") pod "daaf7413-398a-4a39-a375-c130187f9726" (UID: "daaf7413-398a-4a39-a375-c130187f9726"). InnerVolumeSpecName "kube-api-access-r5zwr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.466024 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daaf7413-398a-4a39-a375-c130187f9726-config-data" (OuterVolumeSpecName: "config-data") pod "daaf7413-398a-4a39-a375-c130187f9726" (UID: "daaf7413-398a-4a39-a375-c130187f9726"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.546368 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5zwr\" (UniqueName: \"kubernetes.io/projected/daaf7413-398a-4a39-a375-c130187f9726-kube-api-access-r5zwr\") on node \"crc\" DevicePath \"\""
Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.546402 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daaf7413-398a-4a39-a375-c130187f9726-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.716346 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.747793 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7tvz\" (UniqueName: \"kubernetes.io/projected/4e279d5d-df37-483b-9bc7-682b48b2dbc4-kube-api-access-c7tvz\") pod \"4e279d5d-df37-483b-9bc7-682b48b2dbc4\" (UID: \"4e279d5d-df37-483b-9bc7-682b48b2dbc4\") "
Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.747884 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e279d5d-df37-483b-9bc7-682b48b2dbc4-config-data\") pod \"4e279d5d-df37-483b-9bc7-682b48b2dbc4\" (UID: \"4e279d5d-df37-483b-9bc7-682b48b2dbc4\") "
Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.820370 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e279d5d-df37-483b-9bc7-682b48b2dbc4-kube-api-access-c7tvz" (OuterVolumeSpecName: "kube-api-access-c7tvz") pod "4e279d5d-df37-483b-9bc7-682b48b2dbc4" (UID: "4e279d5d-df37-483b-9bc7-682b48b2dbc4"). InnerVolumeSpecName "kube-api-access-c7tvz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.825261 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e279d5d-df37-483b-9bc7-682b48b2dbc4-config-data" (OuterVolumeSpecName: "config-data") pod "4e279d5d-df37-483b-9bc7-682b48b2dbc4" (UID: "4e279d5d-df37-483b-9bc7-682b48b2dbc4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.852988 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7tvz\" (UniqueName: \"kubernetes.io/projected/4e279d5d-df37-483b-9bc7-682b48b2dbc4-kube-api-access-c7tvz\") on node \"crc\" DevicePath \"\""
Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.853041 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e279d5d-df37-483b-9bc7-682b48b2dbc4-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.077486 4775 generic.go:334] "Generic (PLEG): container finished" podID="3f14b26a-2160-432f-a6cf-f3fab1f31afc" containerID="f1433b1b1039e1ad5b79126e2b4c0ca66e85ee090af1bd408ecba19e2c872f9a" exitCode=0
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.077672 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell06ec2-account-delete-t28fh" event={"ID":"3f14b26a-2160-432f-a6cf-f3fab1f31afc","Type":"ContainerDied","Data":"f1433b1b1039e1ad5b79126e2b4c0ca66e85ee090af1bd408ecba19e2c872f9a"}
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.079398 4775 generic.go:334] "Generic (PLEG): container finished" podID="c25c2a05-1d9b-4551-9c92-f04da2897895" containerID="edf9ee8a876623f0b7161ac8eb02db7ebf284b2ff4311bc67eb9dd19aea83eba" exitCode=0
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.079466 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1ba32-account-delete-hdrb4" event={"ID":"c25c2a05-1d9b-4551-9c92-f04da2897895","Type":"ContainerDied","Data":"edf9ee8a876623f0b7161ac8eb02db7ebf284b2ff4311bc67eb9dd19aea83eba"}
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.079496 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1ba32-account-delete-hdrb4" event={"ID":"c25c2a05-1d9b-4551-9c92-f04da2897895","Type":"ContainerStarted","Data":"85fbf5104c3f0c20a528dc6968da2d23d023d46bdce0c270eb7dbfa1de186eab"}
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.080947 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"5a2ad7dd-d80c-4eb4-8531-c2a8208bb760","Type":"ContainerDied","Data":"73c77f39c3e21579fd11ef895bb7a7f0e8b32a22edb065c50cab5df5c5dc9b81"}
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.080988 4775 scope.go:117] "RemoveContainer" containerID="b3037b72f855e3514727ac579826433af99bcec07db67273c699c91b0c386a1b"
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.081000 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.083765 4775 generic.go:334] "Generic (PLEG): container finished" podID="4e279d5d-df37-483b-9bc7-682b48b2dbc4" containerID="e4096d3b7888413c8e0420a378fc8bb781cb9864846833a4e649d155b711ef1a" exitCode=0
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.084075 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"4e279d5d-df37-483b-9bc7-682b48b2dbc4","Type":"ContainerDied","Data":"e4096d3b7888413c8e0420a378fc8bb781cb9864846833a4e649d155b711ef1a"}
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.084133 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"4e279d5d-df37-483b-9bc7-682b48b2dbc4","Type":"ContainerDied","Data":"004f895311337c942728dd641397c9a9477c224ca4d5348fe186974622dce3f9"}
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.084261 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.101341 4775 scope.go:117] "RemoveContainer" containerID="e4096d3b7888413c8e0420a378fc8bb781cb9864846833a4e649d155b711ef1a"
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.102116 4775 generic.go:334] "Generic (PLEG): container finished" podID="66eb744b-ea4a-4973-8492-2d652c20c447" containerID="8739f351b2bc9ad8d8fe3ea2133ea2116442a4d5b5cf5ef247dd695ec789dddf" exitCode=0
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.102179 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapi9a1c-account-delete-8fps4" event={"ID":"66eb744b-ea4a-4973-8492-2d652c20c447","Type":"ContainerDied","Data":"8739f351b2bc9ad8d8fe3ea2133ea2116442a4d5b5cf5ef247dd695ec789dddf"}
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.102209 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapi9a1c-account-delete-8fps4" event={"ID":"66eb744b-ea4a-4973-8492-2d652c20c447","Type":"ContainerStarted","Data":"66c2ac152aae3c99146ada164002c3c1330dfef6f8de078ecc93d7dbfb32d407"}
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.104488 4775 generic.go:334] "Generic (PLEG): container finished" podID="daaf7413-398a-4a39-a375-c130187f9726" containerID="3ba5fc19235d3db712a04f428f14e623c0a46cd37e971af89d028a76dc93187a" exitCode=0
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.104633 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"daaf7413-398a-4a39-a375-c130187f9726","Type":"ContainerDied","Data":"3ba5fc19235d3db712a04f428f14e623c0a46cd37e971af89d028a76dc93187a"}
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.104748 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"daaf7413-398a-4a39-a375-c130187f9726","Type":"ContainerDied","Data":"97fad5da4691bcf418d5d7014464949a4751476840d2d4bd08f07e42875a279d"}
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.104924 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.129282 4775 scope.go:117] "RemoveContainer" containerID="e4096d3b7888413c8e0420a378fc8bb781cb9864846833a4e649d155b711ef1a"
Jan 23 14:33:33 crc kubenswrapper[4775]: E0123 14:33:33.131448 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4096d3b7888413c8e0420a378fc8bb781cb9864846833a4e649d155b711ef1a\": container with ID starting with e4096d3b7888413c8e0420a378fc8bb781cb9864846833a4e649d155b711ef1a not found: ID does not exist" containerID="e4096d3b7888413c8e0420a378fc8bb781cb9864846833a4e649d155b711ef1a"
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.131487 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4096d3b7888413c8e0420a378fc8bb781cb9864846833a4e649d155b711ef1a"} err="failed to get container status \"e4096d3b7888413c8e0420a378fc8bb781cb9864846833a4e649d155b711ef1a\": rpc error: code = NotFound desc = could not find container \"e4096d3b7888413c8e0420a378fc8bb781cb9864846833a4e649d155b711ef1a\": container with ID starting with e4096d3b7888413c8e0420a378fc8bb781cb9864846833a4e649d155b711ef1a not found: ID does not exist"
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.131514 4775 scope.go:117] "RemoveContainer" containerID="3ba5fc19235d3db712a04f428f14e623c0a46cd37e971af89d028a76dc93187a"
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.152467 4775 scope.go:117] "RemoveContainer" containerID="3ba5fc19235d3db712a04f428f14e623c0a46cd37e971af89d028a76dc93187a"
Jan 23 14:33:33 crc kubenswrapper[4775]: E0123 14:33:33.153039 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ba5fc19235d3db712a04f428f14e623c0a46cd37e971af89d028a76dc93187a\": container with ID starting with 3ba5fc19235d3db712a04f428f14e623c0a46cd37e971af89d028a76dc93187a not found: ID does not exist" containerID="3ba5fc19235d3db712a04f428f14e623c0a46cd37e971af89d028a76dc93187a"
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.153084 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ba5fc19235d3db712a04f428f14e623c0a46cd37e971af89d028a76dc93187a"} err="failed to get container status \"3ba5fc19235d3db712a04f428f14e623c0a46cd37e971af89d028a76dc93187a\": rpc error: code = NotFound desc = could not find container \"3ba5fc19235d3db712a04f428f14e623c0a46cd37e971af89d028a76dc93187a\": container with ID starting with 3ba5fc19235d3db712a04f428f14e623c0a46cd37e971af89d028a76dc93187a not found: ID does not exist"
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.171920 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"]
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.180748 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"]
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.194508 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.202639 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.208840 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"]
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.214688 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"]
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.722070 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e279d5d-df37-483b-9bc7-682b48b2dbc4" path="/var/lib/kubelet/pods/4e279d5d-df37-483b-9bc7-682b48b2dbc4/volumes"
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.722905 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a2ad7dd-d80c-4eb4-8531-c2a8208bb760" path="/var/lib/kubelet/pods/5a2ad7dd-d80c-4eb4-8531-c2a8208bb760/volumes"
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.723397 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="daaf7413-398a-4a39-a375-c130187f9726" path="/var/lib/kubelet/pods/daaf7413-398a-4a39-a375-c130187f9726/volumes"
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.759305 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="08cc29e8-1d83-4f1e-b343-a813a06c7f5a" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.163:8775/\": read tcp 10.217.0.2:38066->10.217.0.163:8775: read: connection reset by peer"
Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.759328 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="08cc29e8-1d83-4f1e-b343-a813a06c7f5a" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.163:8775/\": read tcp 10.217.0.2:38072->10.217.0.163:8775: read: connection reset by peer"
Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.120161 4775 generic.go:334] "Generic (PLEG): container finished" podID="08cc29e8-1d83-4f1e-b343-a813a06c7f5a" containerID="2ec2d8ee517098a55339c83b7adf972f94f667aba8e7519f92926f2a080db62e" exitCode=0
Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.120349 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"08cc29e8-1d83-4f1e-b343-a813a06c7f5a","Type":"ContainerDied","Data":"2ec2d8ee517098a55339c83b7adf972f94f667aba8e7519f92926f2a080db62e"}
Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.204665 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.377392 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-config-data\") pod \"08cc29e8-1d83-4f1e-b343-a813a06c7f5a\" (UID: \"08cc29e8-1d83-4f1e-b343-a813a06c7f5a\") "
Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.377483 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skdhl\" (UniqueName: \"kubernetes.io/projected/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-kube-api-access-skdhl\") pod \"08cc29e8-1d83-4f1e-b343-a813a06c7f5a\" (UID: \"08cc29e8-1d83-4f1e-b343-a813a06c7f5a\") "
Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.377511 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-logs\") pod \"08cc29e8-1d83-4f1e-b343-a813a06c7f5a\" (UID: \"08cc29e8-1d83-4f1e-b343-a813a06c7f5a\") "
Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.378179 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-logs" (OuterVolumeSpecName: "logs") pod "08cc29e8-1d83-4f1e-b343-a813a06c7f5a" (UID: "08cc29e8-1d83-4f1e-b343-a813a06c7f5a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.405354 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-kube-api-access-skdhl" (OuterVolumeSpecName: "kube-api-access-skdhl") pod "08cc29e8-1d83-4f1e-b343-a813a06c7f5a" (UID: "08cc29e8-1d83-4f1e-b343-a813a06c7f5a"). InnerVolumeSpecName "kube-api-access-skdhl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.417754 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-config-data" (OuterVolumeSpecName: "config-data") pod "08cc29e8-1d83-4f1e-b343-a813a06c7f5a" (UID: "08cc29e8-1d83-4f1e-b343-a813a06c7f5a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.441562 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell06ec2-account-delete-t28fh"
Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.456629 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapi9a1c-account-delete-8fps4"
Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.480743 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.480782 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skdhl\" (UniqueName: \"kubernetes.io/projected/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-kube-api-access-skdhl\") on node \"crc\" DevicePath \"\""
Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.480793 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-logs\") on node \"crc\" DevicePath \"\""
Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.497467 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell1ba32-account-delete-hdrb4"
Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.581687 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f14b26a-2160-432f-a6cf-f3fab1f31afc-operator-scripts\") pod \"3f14b26a-2160-432f-a6cf-f3fab1f31afc\" (UID: \"3f14b26a-2160-432f-a6cf-f3fab1f31afc\") "
Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.581812 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mfzx\" (UniqueName: \"kubernetes.io/projected/3f14b26a-2160-432f-a6cf-f3fab1f31afc-kube-api-access-9mfzx\") pod \"3f14b26a-2160-432f-a6cf-f3fab1f31afc\" (UID: \"3f14b26a-2160-432f-a6cf-f3fab1f31afc\") "
Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.581849 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l274z\" (UniqueName: \"kubernetes.io/projected/66eb744b-ea4a-4973-8492-2d652c20c447-kube-api-access-l274z\") pod \"66eb744b-ea4a-4973-8492-2d652c20c447\" (UID: \"66eb744b-ea4a-4973-8492-2d652c20c447\") "
Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.581944 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66eb744b-ea4a-4973-8492-2d652c20c447-operator-scripts\") pod \"66eb744b-ea4a-4973-8492-2d652c20c447\" (UID: \"66eb744b-ea4a-4973-8492-2d652c20c447\") "
Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.582376 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f14b26a-2160-432f-a6cf-f3fab1f31afc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3f14b26a-2160-432f-a6cf-f3fab1f31afc" (UID: "3f14b26a-2160-432f-a6cf-f3fab1f31afc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.582925 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66eb744b-ea4a-4973-8492-2d652c20c447-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "66eb744b-ea4a-4973-8492-2d652c20c447" (UID: "66eb744b-ea4a-4973-8492-2d652c20c447"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.585642 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f14b26a-2160-432f-a6cf-f3fab1f31afc-kube-api-access-9mfzx" (OuterVolumeSpecName: "kube-api-access-9mfzx") pod "3f14b26a-2160-432f-a6cf-f3fab1f31afc" (UID: "3f14b26a-2160-432f-a6cf-f3fab1f31afc"). InnerVolumeSpecName "kube-api-access-9mfzx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.586180 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66eb744b-ea4a-4973-8492-2d652c20c447-kube-api-access-l274z" (OuterVolumeSpecName: "kube-api-access-l274z") pod "66eb744b-ea4a-4973-8492-2d652c20c447" (UID: "66eb744b-ea4a-4973-8492-2d652c20c447"). InnerVolumeSpecName "kube-api-access-l274z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.683765 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ls2l\" (UniqueName: \"kubernetes.io/projected/c25c2a05-1d9b-4551-9c92-f04da2897895-kube-api-access-4ls2l\") pod \"c25c2a05-1d9b-4551-9c92-f04da2897895\" (UID: \"c25c2a05-1d9b-4551-9c92-f04da2897895\") "
Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.683962 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c25c2a05-1d9b-4551-9c92-f04da2897895-operator-scripts\") pod \"c25c2a05-1d9b-4551-9c92-f04da2897895\" (UID: \"c25c2a05-1d9b-4551-9c92-f04da2897895\") "
Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.684415 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c25c2a05-1d9b-4551-9c92-f04da2897895-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c25c2a05-1d9b-4551-9c92-f04da2897895" (UID: "c25c2a05-1d9b-4551-9c92-f04da2897895"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.684526 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mfzx\" (UniqueName: \"kubernetes.io/projected/3f14b26a-2160-432f-a6cf-f3fab1f31afc-kube-api-access-9mfzx\") on node \"crc\" DevicePath \"\""
Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.684548 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l274z\" (UniqueName: \"kubernetes.io/projected/66eb744b-ea4a-4973-8492-2d652c20c447-kube-api-access-l274z\") on node \"crc\" DevicePath \"\""
Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.684562 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66eb744b-ea4a-4973-8492-2d652c20c447-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.684574 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f14b26a-2160-432f-a6cf-f3fab1f31afc-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.688995 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c25c2a05-1d9b-4551-9c92-f04da2897895-kube-api-access-4ls2l" (OuterVolumeSpecName: "kube-api-access-4ls2l") pod "c25c2a05-1d9b-4551-9c92-f04da2897895" (UID: "c25c2a05-1d9b-4551-9c92-f04da2897895"). InnerVolumeSpecName "kube-api-access-4ls2l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.785954 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c25c2a05-1d9b-4551-9c92-f04da2897895-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.786008 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ls2l\" (UniqueName: \"kubernetes.io/projected/c25c2a05-1d9b-4551-9c92-f04da2897895-kube-api-access-4ls2l\") on node \"crc\" DevicePath \"\""
Jan 23 14:33:35 crc kubenswrapper[4775]: I0123 14:33:35.129165 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapi9a1c-account-delete-8fps4" event={"ID":"66eb744b-ea4a-4973-8492-2d652c20c447","Type":"ContainerDied","Data":"66c2ac152aae3c99146ada164002c3c1330dfef6f8de078ecc93d7dbfb32d407"}
Jan 23 14:33:35 crc kubenswrapper[4775]: I0123 14:33:35.129830 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66c2ac152aae3c99146ada164002c3c1330dfef6f8de078ecc93d7dbfb32d407"
Jan 23 14:33:35 crc kubenswrapper[4775]: I0123 14:33:35.129213 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapi9a1c-account-delete-8fps4"
Jan 23 14:33:35 crc kubenswrapper[4775]: I0123 14:33:35.130898 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell06ec2-account-delete-t28fh" event={"ID":"3f14b26a-2160-432f-a6cf-f3fab1f31afc","Type":"ContainerDied","Data":"d0b63f0b8cc603dfbd347c0bc24572e8875a9ad0337253a29c42786093964643"}
Jan 23 14:33:35 crc kubenswrapper[4775]: I0123 14:33:35.131016 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0b63f0b8cc603dfbd347c0bc24572e8875a9ad0337253a29c42786093964643"
Jan 23 14:33:35 crc kubenswrapper[4775]: I0123 14:33:35.130966 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell06ec2-account-delete-t28fh"
Jan 23 14:33:35 crc kubenswrapper[4775]: I0123 14:33:35.132733 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"08cc29e8-1d83-4f1e-b343-a813a06c7f5a","Type":"ContainerDied","Data":"c04673dffc47a353d8b2f30b1c7c3756c9fa915a864e9169df809bc23ac4884f"}
Jan 23 14:33:35 crc kubenswrapper[4775]: I0123 14:33:35.132757 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:33:35 crc kubenswrapper[4775]: I0123 14:33:35.132839 4775 scope.go:117] "RemoveContainer" containerID="2ec2d8ee517098a55339c83b7adf972f94f667aba8e7519f92926f2a080db62e"
Jan 23 14:33:35 crc kubenswrapper[4775]: I0123 14:33:35.135713 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1ba32-account-delete-hdrb4" event={"ID":"c25c2a05-1d9b-4551-9c92-f04da2897895","Type":"ContainerDied","Data":"85fbf5104c3f0c20a528dc6968da2d23d023d46bdce0c270eb7dbfa1de186eab"}
Jan 23 14:33:35 crc kubenswrapper[4775]: I0123 14:33:35.135753 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85fbf5104c3f0c20a528dc6968da2d23d023d46bdce0c270eb7dbfa1de186eab"
Jan 23 14:33:35 crc kubenswrapper[4775]: I0123 14:33:35.136081 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell1ba32-account-delete-hdrb4"
Jan 23 14:33:35 crc kubenswrapper[4775]: I0123 14:33:35.157204 4775 scope.go:117] "RemoveContainer" containerID="64ad254d6ba4ee3740ce23f48d5a83bfdac9d38cd1e51e005d44e141074beaa9"
Jan 23 14:33:35 crc kubenswrapper[4775]: I0123 14:33:35.199938 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:33:35 crc kubenswrapper[4775]: I0123 14:33:35.206216 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:33:35 crc kubenswrapper[4775]: I0123 14:33:35.733368 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08cc29e8-1d83-4f1e-b343-a813a06c7f5a" path="/var/lib/kubelet/pods/08cc29e8-1d83-4f1e-b343-a813a06c7f5a/volumes"
Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.064132 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-bp7mf"]
Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.070027 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-bp7mf"]
Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.082201 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz"]
Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.090122 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell06ec2-account-delete-t28fh"]
Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.095678 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novacell06ec2-account-delete-t28fh"]
Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.101149 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz"]
Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.169776 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-pmc6n"]
Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.178467 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-pmc6n"]
Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.189903 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6"]
Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.196066 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell1ba32-account-delete-hdrb4"]
Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.202503 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6"]
Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.207655 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novacell1ba32-account-delete-hdrb4"]
Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.269071 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-db-create-hn7kx"]
Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.277604 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-db-create-hn7kx"]
Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.295589 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw"]
Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.304866 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novaapi9a1c-account-delete-8fps4"]
Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.312627 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novaapi9a1c-account-delete-8fps4"]
Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.322597 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw"]
Jan 23 14:33:37 crc kubenswrapper[4775]: I0123 14:33:37.053301 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-72a2-account-create-update-4q5xn"]
Jan 23 14:33:37 crc kubenswrapper[4775]: I0123 14:33:37.059823 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podUID="5a2ad7dd-d80c-4eb4-8531-c2a8208bb760" containerName="nova-kuttl-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"http://10.217.0.156:6080/vnc_lite.html\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:33:37 crc kubenswrapper[4775]: I0123 14:33:37.060853 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-72a2-account-create-update-4q5xn"]
Jan 23 14:33:37 crc kubenswrapper[4775]: I0123 14:33:37.735837 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f14b26a-2160-432f-a6cf-f3fab1f31afc" path="/var/lib/kubelet/pods/3f14b26a-2160-432f-a6cf-f3fab1f31afc/volumes"
Jan 23 14:33:37 crc kubenswrapper[4775]: I0123 14:33:37.737909 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="500dfca1-a7c0-488c-89ba-2d750245e322" path="/var/lib/kubelet/pods/500dfca1-a7c0-488c-89ba-2d750245e322/volumes"
Jan 23 14:33:37 crc kubenswrapper[4775]: I0123 14:33:37.748370 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a0d129e-9a65-484c-b8a6-ca5a0120d95d" path="/var/lib/kubelet/pods/5a0d129e-9a65-484c-b8a6-ca5a0120d95d/volumes"
Jan 23 14:33:37 crc kubenswrapper[4775]: I0123 14:33:37.749066 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b494b92-3cd1-4b60-853c-a135bb158d8c" path="/var/lib/kubelet/pods/5b494b92-3cd1-4b60-853c-a135bb158d8c/volumes"
Jan 23 14:33:37 crc kubenswrapper[4775]: I0123 14:33:37.750149 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66eb744b-ea4a-4973-8492-2d652c20c447" path="/var/lib/kubelet/pods/66eb744b-ea4a-4973-8492-2d652c20c447/volumes"
Jan 23 14:33:37 crc kubenswrapper[4775]: I0123 14:33:37.750891 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68223c6c-51af-4369-87c2-368ffe71edb7" path="/var/lib/kubelet/pods/68223c6c-51af-4369-87c2-368ffe71edb7/volumes"
Jan 23 14:33:37 crc kubenswrapper[4775]: I0123 14:33:37.751570 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a5345f7-7dc8-4e09-8566-ee1dbb897cce" path="/var/lib/kubelet/pods/7a5345f7-7dc8-4e09-8566-ee1dbb897cce/volumes"
Jan 23 14:33:37 crc kubenswrapper[4775]: I0123 14:33:37.753257 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9857104-b2d2-4b42-a96d-2f9f1fadc406" path="/var/lib/kubelet/pods/a9857104-b2d2-4b42-a96d-2f9f1fadc406/volumes"
Jan 23 14:33:37 crc kubenswrapper[4775]: I0123 14:33:37.754084 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c25c2a05-1d9b-4551-9c92-f04da2897895" path="/var/lib/kubelet/pods/c25c2a05-1d9b-4551-9c92-f04da2897895/volumes"
Jan 23 14:33:37 crc kubenswrapper[4775]: I0123 14:33:37.754905 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffe262ed-6f79-4dad-91c6-168b164a6459" path="/var/lib/kubelet/pods/ffe262ed-6f79-4dad-91c6-168b164a6459/volumes"
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.077451 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/placement-fb53-account-create-update-mth7w"]
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.091487 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-db-create-8k7zh"]
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.098227 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/placement-db-create-qn6k5"]
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.107434 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-db-create-8k7zh"]
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.115214 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/placement-fb53-account-create-update-mth7w"]
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.123318 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/placement-db-create-qn6k5"]
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.702610 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-db-create-5h6rf"]
Jan 23 14:33:38 crc kubenswrapper[4775]: E0123 14:33:38.703104 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08cc29e8-1d83-4f1e-b343-a813a06c7f5a" containerName="nova-kuttl-metadata-log"
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.703135 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="08cc29e8-1d83-4f1e-b343-a813a06c7f5a" containerName="nova-kuttl-metadata-log"
Jan 23 14:33:38 crc kubenswrapper[4775]: E0123 14:33:38.703161 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66eb744b-ea4a-4973-8492-2d652c20c447" containerName="mariadb-account-delete"
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.703173 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="66eb744b-ea4a-4973-8492-2d652c20c447" containerName="mariadb-account-delete"
Jan 23 14:33:38 crc kubenswrapper[4775]: E0123 14:33:38.703193 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e279d5d-df37-483b-9bc7-682b48b2dbc4" containerName="nova-kuttl-cell1-conductor-conductor"
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.703206 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e279d5d-df37-483b-9bc7-682b48b2dbc4" containerName="nova-kuttl-cell1-conductor-conductor"
Jan 23 14:33:38 crc kubenswrapper[4775]: E0123 14:33:38.703230 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f14b26a-2160-432f-a6cf-f3fab1f31afc" containerName="mariadb-account-delete"
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.703242 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f14b26a-2160-432f-a6cf-f3fab1f31afc" containerName="mariadb-account-delete"
Jan 23 14:33:38 crc kubenswrapper[4775]: E0123 14:33:38.703268 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a2ad7dd-d80c-4eb4-8531-c2a8208bb760" containerName="nova-kuttl-cell1-novncproxy-novncproxy"
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.703281 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a2ad7dd-d80c-4eb4-8531-c2a8208bb760" containerName="nova-kuttl-cell1-novncproxy-novncproxy"
Jan 23 14:33:38 crc kubenswrapper[4775]: E0123 14:33:38.703294 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daaf7413-398a-4a39-a375-c130187f9726" containerName="nova-kuttl-scheduler-scheduler"
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.703306 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="daaf7413-398a-4a39-a375-c130187f9726" containerName="nova-kuttl-scheduler-scheduler"
Jan 23 14:33:38 crc kubenswrapper[4775]: E0123 14:33:38.703325 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08cc29e8-1d83-4f1e-b343-a813a06c7f5a" containerName="nova-kuttl-metadata-metadata"
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.703337 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="08cc29e8-1d83-4f1e-b343-a813a06c7f5a" containerName="nova-kuttl-metadata-metadata"
Jan 23 14:33:38 crc kubenswrapper[4775]: E0123 14:33:38.703353 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c25c2a05-1d9b-4551-9c92-f04da2897895" containerName="mariadb-account-delete"
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.703364 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="c25c2a05-1d9b-4551-9c92-f04da2897895" containerName="mariadb-account-delete"
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.703606 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f14b26a-2160-432f-a6cf-f3fab1f31afc" containerName="mariadb-account-delete"
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.703623 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="66eb744b-ea4a-4973-8492-2d652c20c447" containerName="mariadb-account-delete"
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.703639 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="08cc29e8-1d83-4f1e-b343-a813a06c7f5a" containerName="nova-kuttl-metadata-metadata"
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.703662 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="daaf7413-398a-4a39-a375-c130187f9726" containerName="nova-kuttl-scheduler-scheduler"
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.703685 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e279d5d-df37-483b-9bc7-682b48b2dbc4" containerName="nova-kuttl-cell1-conductor-conductor"
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.703700 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a2ad7dd-d80c-4eb4-8531-c2a8208bb760" containerName="nova-kuttl-cell1-novncproxy-novncproxy"
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.703723 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="08cc29e8-1d83-4f1e-b343-a813a06c7f5a" containerName="nova-kuttl-metadata-log"
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.703741 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="c25c2a05-1d9b-4551-9c92-f04da2897895" containerName="mariadb-account-delete"
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.704535 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-5h6rf"
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.714170 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-5h6rf"]
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.788906 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-nr9cr"]
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.790004 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-nr9cr"
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.795629 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-nr9cr"]
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.856618 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e86f57ad-0eba-4794-8f64-f70609e535e8-operator-scripts\") pod \"nova-api-db-create-5h6rf\" (UID: \"e86f57ad-0eba-4794-8f64-f70609e535e8\") " pod="nova-kuttl-default/nova-api-db-create-5h6rf"
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.856899 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l58vv\" (UniqueName: \"kubernetes.io/projected/e86f57ad-0eba-4794-8f64-f70609e535e8-kube-api-access-l58vv\") pod \"nova-api-db-create-5h6rf\" (UID: \"e86f57ad-0eba-4794-8f64-f70609e535e8\") " pod="nova-kuttl-default/nova-api-db-create-5h6rf"
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.893559 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-p9ljs"]
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.895111 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-p9ljs"
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.899588 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc"]
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.900591 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc"
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.902109 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-api-db-secret"
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.905882 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-p9ljs"]
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.917076 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc"]
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.958486 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e86f57ad-0eba-4794-8f64-f70609e535e8-operator-scripts\") pod \"nova-api-db-create-5h6rf\" (UID: \"e86f57ad-0eba-4794-8f64-f70609e535e8\") " pod="nova-kuttl-default/nova-api-db-create-5h6rf"
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.958571 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l58vv\" (UniqueName: \"kubernetes.io/projected/e86f57ad-0eba-4794-8f64-f70609e535e8-kube-api-access-l58vv\") pod \"nova-api-db-create-5h6rf\" (UID: \"e86f57ad-0eba-4794-8f64-f70609e535e8\") " pod="nova-kuttl-default/nova-api-db-create-5h6rf"
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.958616 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m56dd\" (UniqueName: \"kubernetes.io/projected/42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c-kube-api-access-m56dd\") pod \"nova-cell0-db-create-nr9cr\" (UID: \"42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c\") " pod="nova-kuttl-default/nova-cell0-db-create-nr9cr"
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.958646 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c-operator-scripts\") pod \"nova-cell0-db-create-nr9cr\" (UID: \"42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c\") " pod="nova-kuttl-default/nova-cell0-db-create-nr9cr"
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.959351 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e86f57ad-0eba-4794-8f64-f70609e535e8-operator-scripts\") pod \"nova-api-db-create-5h6rf\" (UID: \"e86f57ad-0eba-4794-8f64-f70609e535e8\") " pod="nova-kuttl-default/nova-api-db-create-5h6rf"
Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.984091 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l58vv\" (UniqueName: \"kubernetes.io/projected/e86f57ad-0eba-4794-8f64-f70609e535e8-kube-api-access-l58vv\") pod \"nova-api-db-create-5h6rf\" (UID: \"e86f57ad-0eba-4794-8f64-f70609e535e8\") " pod="nova-kuttl-default/nova-api-db-create-5h6rf"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.022648 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-5h6rf"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.060064 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c-operator-scripts\") pod \"nova-cell0-db-create-nr9cr\" (UID: \"42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c\") " pod="nova-kuttl-default/nova-cell0-db-create-nr9cr"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.060141 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4b0dbf6-948b-45c4-b5a0-6027f816c873-operator-scripts\") pod \"nova-api-a3ac-account-create-update-phbcc\" (UID: \"c4b0dbf6-948b-45c4-b5a0-6027f816c873\") " pod="nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.060200 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/112204a1-12d6-49b5-b97e-de4daab49dcf-operator-scripts\") pod \"nova-cell1-db-create-p9ljs\" (UID: \"112204a1-12d6-49b5-b97e-de4daab49dcf\") " pod="nova-kuttl-default/nova-cell1-db-create-p9ljs"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.060291 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn74t\" (UniqueName: \"kubernetes.io/projected/112204a1-12d6-49b5-b97e-de4daab49dcf-kube-api-access-vn74t\") pod \"nova-cell1-db-create-p9ljs\" (UID: \"112204a1-12d6-49b5-b97e-de4daab49dcf\") " pod="nova-kuttl-default/nova-cell1-db-create-p9ljs"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.060315 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drkqv\" (UniqueName: \"kubernetes.io/projected/c4b0dbf6-948b-45c4-b5a0-6027f816c873-kube-api-access-drkqv\") pod \"nova-api-a3ac-account-create-update-phbcc\" (UID: \"c4b0dbf6-948b-45c4-b5a0-6027f816c873\") " pod="nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.060339 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m56dd\" (UniqueName: \"kubernetes.io/projected/42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c-kube-api-access-m56dd\") pod \"nova-cell0-db-create-nr9cr\" (UID: \"42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c\") " pod="nova-kuttl-default/nova-cell0-db-create-nr9cr"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.061311 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c-operator-scripts\") pod \"nova-cell0-db-create-nr9cr\" (UID: \"42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c\") " pod="nova-kuttl-default/nova-cell0-db-create-nr9cr"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.080360 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m56dd\" (UniqueName: \"kubernetes.io/projected/42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c-kube-api-access-m56dd\") pod \"nova-cell0-db-create-nr9cr\" (UID: \"42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c\") " pod="nova-kuttl-default/nova-cell0-db-create-nr9cr"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.103815 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-nr9cr"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.119428 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw"]
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.120284 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.121951 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell0-db-secret"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.140543 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw"]
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.161446 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn74t\" (UniqueName: \"kubernetes.io/projected/112204a1-12d6-49b5-b97e-de4daab49dcf-kube-api-access-vn74t\") pod \"nova-cell1-db-create-p9ljs\" (UID: \"112204a1-12d6-49b5-b97e-de4daab49dcf\") " pod="nova-kuttl-default/nova-cell1-db-create-p9ljs"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.161488 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drkqv\" (UniqueName: \"kubernetes.io/projected/c4b0dbf6-948b-45c4-b5a0-6027f816c873-kube-api-access-drkqv\") pod \"nova-api-a3ac-account-create-update-phbcc\" (UID: \"c4b0dbf6-948b-45c4-b5a0-6027f816c873\") " pod="nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.161544 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4b0dbf6-948b-45c4-b5a0-6027f816c873-operator-scripts\") pod \"nova-api-a3ac-account-create-update-phbcc\" (UID: \"c4b0dbf6-948b-45c4-b5a0-6027f816c873\") " pod="nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.161581 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/112204a1-12d6-49b5-b97e-de4daab49dcf-operator-scripts\") pod \"nova-cell1-db-create-p9ljs\" (UID: \"112204a1-12d6-49b5-b97e-de4daab49dcf\") " pod="nova-kuttl-default/nova-cell1-db-create-p9ljs"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.162169 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/112204a1-12d6-49b5-b97e-de4daab49dcf-operator-scripts\") pod \"nova-cell1-db-create-p9ljs\" (UID: \"112204a1-12d6-49b5-b97e-de4daab49dcf\") " pod="nova-kuttl-default/nova-cell1-db-create-p9ljs"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.162969 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4b0dbf6-948b-45c4-b5a0-6027f816c873-operator-scripts\") pod \"nova-api-a3ac-account-create-update-phbcc\" (UID: \"c4b0dbf6-948b-45c4-b5a0-6027f816c873\") " pod="nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.191556 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn74t\" (UniqueName: \"kubernetes.io/projected/112204a1-12d6-49b5-b97e-de4daab49dcf-kube-api-access-vn74t\") pod \"nova-cell1-db-create-p9ljs\" (UID: \"112204a1-12d6-49b5-b97e-de4daab49dcf\") " pod="nova-kuttl-default/nova-cell1-db-create-p9ljs"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.209010 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drkqv\" (UniqueName: \"kubernetes.io/projected/c4b0dbf6-948b-45c4-b5a0-6027f816c873-kube-api-access-drkqv\") pod \"nova-api-a3ac-account-create-update-phbcc\" (UID: \"c4b0dbf6-948b-45c4-b5a0-6027f816c873\") " pod="nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.211103 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-p9ljs"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.217225 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.262354 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5wb4\" (UniqueName: \"kubernetes.io/projected/7ff6b200-7364-4e13-956d-628abd48cbaa-kube-api-access-g5wb4\") pod \"nova-cell0-4dcc-account-create-update-7fftw\" (UID: \"7ff6b200-7364-4e13-956d-628abd48cbaa\") " pod="nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.262488 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ff6b200-7364-4e13-956d-628abd48cbaa-operator-scripts\") pod \"nova-cell0-4dcc-account-create-update-7fftw\" (UID: \"7ff6b200-7364-4e13-956d-628abd48cbaa\") " pod="nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.340574 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t"]
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.344995 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.348623 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell1-db-secret"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.353286 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t"]
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.366383 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ff6b200-7364-4e13-956d-628abd48cbaa-operator-scripts\") pod \"nova-cell0-4dcc-account-create-update-7fftw\" (UID: \"7ff6b200-7364-4e13-956d-628abd48cbaa\") " pod="nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.366475 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5wb4\" (UniqueName: \"kubernetes.io/projected/7ff6b200-7364-4e13-956d-628abd48cbaa-kube-api-access-g5wb4\") pod \"nova-cell0-4dcc-account-create-update-7fftw\" (UID: \"7ff6b200-7364-4e13-956d-628abd48cbaa\") " pod="nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.367439 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ff6b200-7364-4e13-956d-628abd48cbaa-operator-scripts\") pod \"nova-cell0-4dcc-account-create-update-7fftw\" (UID: \"7ff6b200-7364-4e13-956d-628abd48cbaa\") " pod="nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.384661 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5wb4\" (UniqueName: \"kubernetes.io/projected/7ff6b200-7364-4e13-956d-628abd48cbaa-kube-api-access-g5wb4\") pod \"nova-cell0-4dcc-account-create-update-7fftw\" (UID: \"7ff6b200-7364-4e13-956d-628abd48cbaa\") " pod="nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.435112 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.467898 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tbdp\" (UniqueName: \"kubernetes.io/projected/b95fa161-1171-4dc2-b0be-3aa279cb717d-kube-api-access-7tbdp\") pod \"nova-cell1-1814-account-create-update-nnb6t\" (UID: \"b95fa161-1171-4dc2-b0be-3aa279cb717d\") " pod="nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.468223 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b95fa161-1171-4dc2-b0be-3aa279cb717d-operator-scripts\") pod \"nova-cell1-1814-account-create-update-nnb6t\" (UID: \"b95fa161-1171-4dc2-b0be-3aa279cb717d\") " pod="nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.569445 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tbdp\" (UniqueName: \"kubernetes.io/projected/b95fa161-1171-4dc2-b0be-3aa279cb717d-kube-api-access-7tbdp\") pod \"nova-cell1-1814-account-create-update-nnb6t\" (UID: \"b95fa161-1171-4dc2-b0be-3aa279cb717d\") " pod="nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.569531 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b95fa161-1171-4dc2-b0be-3aa279cb717d-operator-scripts\") pod \"nova-cell1-1814-account-create-update-nnb6t\" (UID: \"b95fa161-1171-4dc2-b0be-3aa279cb717d\") " pod="nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.570393 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b95fa161-1171-4dc2-b0be-3aa279cb717d-operator-scripts\") pod \"nova-cell1-1814-account-create-update-nnb6t\" (UID: \"b95fa161-1171-4dc2-b0be-3aa279cb717d\") " pod="nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.584931 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tbdp\" (UniqueName: \"kubernetes.io/projected/b95fa161-1171-4dc2-b0be-3aa279cb717d-kube-api-access-7tbdp\") pod \"nova-cell1-1814-account-create-update-nnb6t\" (UID: \"b95fa161-1171-4dc2-b0be-3aa279cb717d\") " pod="nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.669329 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.685336 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-nr9cr"]
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.693063 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-5h6rf"]
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.740279 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2887a864-f392-4887-8b38-bde90ef8f18d" path="/var/lib/kubelet/pods/2887a864-f392-4887-8b38-bde90ef8f18d/volumes"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.740788 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7a04db9-60c9-4bce-8100-18a4134d0c86" path="/var/lib/kubelet/pods/c7a04db9-60c9-4bce-8100-18a4134d0c86/volumes"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.741278 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da477c0f-52c9-4e94-894f-d953e46afd95" path="/var/lib/kubelet/pods/da477c0f-52c9-4e94-894f-d953e46afd95/volumes"
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.906150 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw"]
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.922847 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-p9ljs"]
Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.929698 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc"]
Jan 23 14:33:40 crc kubenswrapper[4775]: I0123 14:33:40.096017 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t"]
Jan 23 14:33:40 crc kubenswrapper[4775]: W0123 14:33:40.178693 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb95fa161_1171_4dc2_b0be_3aa279cb717d.slice/crio-3c229e0a1a59418ea93ecd0ed3eea10a36b0db65c956d12878a6279bf8ef6a06 WatchSource:0}: Error finding container 3c229e0a1a59418ea93ecd0ed3eea10a36b0db65c956d12878a6279bf8ef6a06: Status 404 returned error can't find the container with id 3c229e0a1a59418ea93ecd0ed3eea10a36b0db65c956d12878a6279bf8ef6a06
Jan 23 14:33:40 crc kubenswrapper[4775]: I0123 14:33:40.221507 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t" event={"ID":"b95fa161-1171-4dc2-b0be-3aa279cb717d","Type":"ContainerStarted","Data":"3c229e0a1a59418ea93ecd0ed3eea10a36b0db65c956d12878a6279bf8ef6a06"}
Jan 23 14:33:40 crc kubenswrapper[4775]: I0123 14:33:40.223773 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-p9ljs" event={"ID":"112204a1-12d6-49b5-b97e-de4daab49dcf","Type":"ContainerStarted","Data":"1e32ef65ab7f89fb7990abe8d40495bef02031f8258cd2be4cdb5fd231e0255d"}
Jan 23 14:33:40 crc kubenswrapper[4775]: I0123 14:33:40.226359 4775 generic.go:334] "Generic (PLEG): container finished" podID="e86f57ad-0eba-4794-8f64-f70609e535e8" containerID="7a9edcf7a6eef68f25783c87ff91eb1a9a70ab35e82018e110b39960153337f3" exitCode=0
Jan 23 14:33:40 crc kubenswrapper[4775]: I0123 14:33:40.226428 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-5h6rf"
event={"ID":"e86f57ad-0eba-4794-8f64-f70609e535e8","Type":"ContainerDied","Data":"7a9edcf7a6eef68f25783c87ff91eb1a9a70ab35e82018e110b39960153337f3"} Jan 23 14:33:40 crc kubenswrapper[4775]: I0123 14:33:40.226454 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-5h6rf" event={"ID":"e86f57ad-0eba-4794-8f64-f70609e535e8","Type":"ContainerStarted","Data":"9ecbabb0e447ddd1e5b163ce8b247ab0bfb5995d497181c22b82fa1a883915e6"} Jan 23 14:33:40 crc kubenswrapper[4775]: I0123 14:33:40.227853 4775 generic.go:334] "Generic (PLEG): container finished" podID="42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c" containerID="d5a625216c448145f1513473de681abbe074c66d1f215fbd1239d870733f21c4" exitCode=0 Jan 23 14:33:40 crc kubenswrapper[4775]: I0123 14:33:40.227907 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-nr9cr" event={"ID":"42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c","Type":"ContainerDied","Data":"d5a625216c448145f1513473de681abbe074c66d1f215fbd1239d870733f21c4"} Jan 23 14:33:40 crc kubenswrapper[4775]: I0123 14:33:40.227925 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-nr9cr" event={"ID":"42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c","Type":"ContainerStarted","Data":"aaa44cc4c098096ebf4a458b111b86f4cf08bd9fe46316f91d0556773a1e009d"} Jan 23 14:33:40 crc kubenswrapper[4775]: I0123 14:33:40.229462 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc" event={"ID":"c4b0dbf6-948b-45c4-b5a0-6027f816c873","Type":"ContainerStarted","Data":"d0ab6bb65e42df4d16f7f2b6be6fe5d45e9f1defc29fc6849b0c8bb5cacb8e34"} Jan 23 14:33:40 crc kubenswrapper[4775]: I0123 14:33:40.232021 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw" event={"ID":"7ff6b200-7364-4e13-956d-628abd48cbaa","Type":"ContainerStarted","Data":"f4cd5135088bb3777d0dc8183607f28098307d3bcb8182377933eaaa9099f247"} Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.244385 4775 generic.go:334] "Generic (PLEG): container finished" podID="b95fa161-1171-4dc2-b0be-3aa279cb717d" containerID="5660aa2517d0892f37febd6e7336a548ede2e720ab7264d812ad264a50eb46b2" exitCode=0 Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.244491 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t" event={"ID":"b95fa161-1171-4dc2-b0be-3aa279cb717d","Type":"ContainerDied","Data":"5660aa2517d0892f37febd6e7336a548ede2e720ab7264d812ad264a50eb46b2"} Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.248029 4775 generic.go:334] "Generic (PLEG): container finished" podID="112204a1-12d6-49b5-b97e-de4daab49dcf" containerID="f00011167bc09af603822453b51182838d413ff1ad414892e875b504e0751ab6" exitCode=0 Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.248096 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-p9ljs" event={"ID":"112204a1-12d6-49b5-b97e-de4daab49dcf","Type":"ContainerDied","Data":"f00011167bc09af603822453b51182838d413ff1ad414892e875b504e0751ab6"} Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.250660 4775 generic.go:334] "Generic (PLEG): container finished" podID="c4b0dbf6-948b-45c4-b5a0-6027f816c873" containerID="61ab9533e70d4b69baa5f710542bcb0de5d0a3981f871d6eb9f7dfa31ff05f49" exitCode=0 Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.250750 4775 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc" event={"ID":"c4b0dbf6-948b-45c4-b5a0-6027f816c873","Type":"ContainerDied","Data":"61ab9533e70d4b69baa5f710542bcb0de5d0a3981f871d6eb9f7dfa31ff05f49"} Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.253238 4775 generic.go:334] "Generic (PLEG): container finished" podID="7ff6b200-7364-4e13-956d-628abd48cbaa" containerID="de44f8ed18b4260ec3e0e35481cd929500e4cac5322c792037bcf7ae3fda7a94" exitCode=0 Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.253305 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw" event={"ID":"7ff6b200-7364-4e13-956d-628abd48cbaa","Type":"ContainerDied","Data":"de44f8ed18b4260ec3e0e35481cd929500e4cac5322c792037bcf7ae3fda7a94"} Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.738928 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-nr9cr" Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.803018 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-5h6rf" Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.906745 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l58vv\" (UniqueName: \"kubernetes.io/projected/e86f57ad-0eba-4794-8f64-f70609e535e8-kube-api-access-l58vv\") pod \"e86f57ad-0eba-4794-8f64-f70609e535e8\" (UID: \"e86f57ad-0eba-4794-8f64-f70609e535e8\") " Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.906912 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c-operator-scripts\") pod \"42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c\" (UID: \"42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c\") " Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.906998 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e86f57ad-0eba-4794-8f64-f70609e535e8-operator-scripts\") pod \"e86f57ad-0eba-4794-8f64-f70609e535e8\" (UID: \"e86f57ad-0eba-4794-8f64-f70609e535e8\") " Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.907018 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m56dd\" (UniqueName: \"kubernetes.io/projected/42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c-kube-api-access-m56dd\") pod \"42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c\" (UID: \"42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c\") " Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.908411 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e86f57ad-0eba-4794-8f64-f70609e535e8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e86f57ad-0eba-4794-8f64-f70609e535e8" (UID: "e86f57ad-0eba-4794-8f64-f70609e535e8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.909123 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c" (UID: "42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.913942 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e86f57ad-0eba-4794-8f64-f70609e535e8-kube-api-access-l58vv" (OuterVolumeSpecName: "kube-api-access-l58vv") pod "e86f57ad-0eba-4794-8f64-f70609e535e8" (UID: "e86f57ad-0eba-4794-8f64-f70609e535e8"). InnerVolumeSpecName "kube-api-access-l58vv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.931685 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c-kube-api-access-m56dd" (OuterVolumeSpecName: "kube-api-access-m56dd") pod "42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c" (UID: "42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c"). InnerVolumeSpecName "kube-api-access-m56dd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.008518 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.008549 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e86f57ad-0eba-4794-8f64-f70609e535e8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.008559 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m56dd\" (UniqueName: \"kubernetes.io/projected/42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c-kube-api-access-m56dd\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.008571 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l58vv\" (UniqueName: \"kubernetes.io/projected/e86f57ad-0eba-4794-8f64-f70609e535e8-kube-api-access-l58vv\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.277126 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-5h6rf" Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.277134 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-5h6rf" event={"ID":"e86f57ad-0eba-4794-8f64-f70609e535e8","Type":"ContainerDied","Data":"9ecbabb0e447ddd1e5b163ce8b247ab0bfb5995d497181c22b82fa1a883915e6"} Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.277399 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ecbabb0e447ddd1e5b163ce8b247ab0bfb5995d497181c22b82fa1a883915e6" Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.279982 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-nr9cr" event={"ID":"42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c","Type":"ContainerDied","Data":"aaa44cc4c098096ebf4a458b111b86f4cf08bd9fe46316f91d0556773a1e009d"} Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.280040 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-nr9cr" Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.280045 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aaa44cc4c098096ebf4a458b111b86f4cf08bd9fe46316f91d0556773a1e009d" Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.531765 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-p9ljs" Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.619181 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/112204a1-12d6-49b5-b97e-de4daab49dcf-operator-scripts\") pod \"112204a1-12d6-49b5-b97e-de4daab49dcf\" (UID: \"112204a1-12d6-49b5-b97e-de4daab49dcf\") " Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.619369 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vn74t\" (UniqueName: \"kubernetes.io/projected/112204a1-12d6-49b5-b97e-de4daab49dcf-kube-api-access-vn74t\") pod \"112204a1-12d6-49b5-b97e-de4daab49dcf\" (UID: \"112204a1-12d6-49b5-b97e-de4daab49dcf\") " Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.620759 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/112204a1-12d6-49b5-b97e-de4daab49dcf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "112204a1-12d6-49b5-b97e-de4daab49dcf" (UID: "112204a1-12d6-49b5-b97e-de4daab49dcf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.627512 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/112204a1-12d6-49b5-b97e-de4daab49dcf-kube-api-access-vn74t" (OuterVolumeSpecName: "kube-api-access-vn74t") pod "112204a1-12d6-49b5-b97e-de4daab49dcf" (UID: "112204a1-12d6-49b5-b97e-de4daab49dcf"). InnerVolumeSpecName "kube-api-access-vn74t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.721758 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vn74t\" (UniqueName: \"kubernetes.io/projected/112204a1-12d6-49b5-b97e-de4daab49dcf-kube-api-access-vn74t\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.721794 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/112204a1-12d6-49b5-b97e-de4daab49dcf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.880718 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc" Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.887153 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t" Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.896718 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.028414 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tbdp\" (UniqueName: \"kubernetes.io/projected/b95fa161-1171-4dc2-b0be-3aa279cb717d-kube-api-access-7tbdp\") pod \"b95fa161-1171-4dc2-b0be-3aa279cb717d\" (UID: \"b95fa161-1171-4dc2-b0be-3aa279cb717d\") " Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.028459 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4b0dbf6-948b-45c4-b5a0-6027f816c873-operator-scripts\") pod \"c4b0dbf6-948b-45c4-b5a0-6027f816c873\" (UID: \"c4b0dbf6-948b-45c4-b5a0-6027f816c873\") " Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.028487 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ff6b200-7364-4e13-956d-628abd48cbaa-operator-scripts\") pod \"7ff6b200-7364-4e13-956d-628abd48cbaa\" (UID: \"7ff6b200-7364-4e13-956d-628abd48cbaa\") " Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.028528 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5wb4\" (UniqueName: \"kubernetes.io/projected/7ff6b200-7364-4e13-956d-628abd48cbaa-kube-api-access-g5wb4\") pod \"7ff6b200-7364-4e13-956d-628abd48cbaa\" (UID: \"7ff6b200-7364-4e13-956d-628abd48cbaa\") " Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.028569 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b95fa161-1171-4dc2-b0be-3aa279cb717d-operator-scripts\") pod \"b95fa161-1171-4dc2-b0be-3aa279cb717d\" (UID: \"b95fa161-1171-4dc2-b0be-3aa279cb717d\") " Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.028698 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drkqv\" (UniqueName: \"kubernetes.io/projected/c4b0dbf6-948b-45c4-b5a0-6027f816c873-kube-api-access-drkqv\") pod \"c4b0dbf6-948b-45c4-b5a0-6027f816c873\" (UID: \"c4b0dbf6-948b-45c4-b5a0-6027f816c873\") " Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.029517 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4b0dbf6-948b-45c4-b5a0-6027f816c873-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c4b0dbf6-948b-45c4-b5a0-6027f816c873" (UID: "c4b0dbf6-948b-45c4-b5a0-6027f816c873"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.030125 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b95fa161-1171-4dc2-b0be-3aa279cb717d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b95fa161-1171-4dc2-b0be-3aa279cb717d" (UID: "b95fa161-1171-4dc2-b0be-3aa279cb717d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.030256 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ff6b200-7364-4e13-956d-628abd48cbaa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7ff6b200-7364-4e13-956d-628abd48cbaa" (UID: "7ff6b200-7364-4e13-956d-628abd48cbaa"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.034133 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4b0dbf6-948b-45c4-b5a0-6027f816c873-kube-api-access-drkqv" (OuterVolumeSpecName: "kube-api-access-drkqv") pod "c4b0dbf6-948b-45c4-b5a0-6027f816c873" (UID: "c4b0dbf6-948b-45c4-b5a0-6027f816c873"). InnerVolumeSpecName "kube-api-access-drkqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.034870 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b95fa161-1171-4dc2-b0be-3aa279cb717d-kube-api-access-7tbdp" (OuterVolumeSpecName: "kube-api-access-7tbdp") pod "b95fa161-1171-4dc2-b0be-3aa279cb717d" (UID: "b95fa161-1171-4dc2-b0be-3aa279cb717d"). InnerVolumeSpecName "kube-api-access-7tbdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.035948 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ff6b200-7364-4e13-956d-628abd48cbaa-kube-api-access-g5wb4" (OuterVolumeSpecName: "kube-api-access-g5wb4") pod "7ff6b200-7364-4e13-956d-628abd48cbaa" (UID: "7ff6b200-7364-4e13-956d-628abd48cbaa"). InnerVolumeSpecName "kube-api-access-g5wb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.130752 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tbdp\" (UniqueName: \"kubernetes.io/projected/b95fa161-1171-4dc2-b0be-3aa279cb717d-kube-api-access-7tbdp\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.130796 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4b0dbf6-948b-45c4-b5a0-6027f816c873-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.130827 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ff6b200-7364-4e13-956d-628abd48cbaa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.130836 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5wb4\" (UniqueName: \"kubernetes.io/projected/7ff6b200-7364-4e13-956d-628abd48cbaa-kube-api-access-g5wb4\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.130845 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b95fa161-1171-4dc2-b0be-3aa279cb717d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.130855 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drkqv\" (UniqueName: \"kubernetes.io/projected/c4b0dbf6-948b-45c4-b5a0-6027f816c873-kube-api-access-drkqv\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.295502 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.295516 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc" event={"ID":"c4b0dbf6-948b-45c4-b5a0-6027f816c873","Type":"ContainerDied","Data":"d0ab6bb65e42df4d16f7f2b6be6fe5d45e9f1defc29fc6849b0c8bb5cacb8e34"} Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.295582 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0ab6bb65e42df4d16f7f2b6be6fe5d45e9f1defc29fc6849b0c8bb5cacb8e34" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.299605 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.299613 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw" event={"ID":"7ff6b200-7364-4e13-956d-628abd48cbaa","Type":"ContainerDied","Data":"f4cd5135088bb3777d0dc8183607f28098307d3bcb8182377933eaaa9099f247"} Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.299675 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4cd5135088bb3777d0dc8183607f28098307d3bcb8182377933eaaa9099f247" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.302521 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t" event={"ID":"b95fa161-1171-4dc2-b0be-3aa279cb717d","Type":"ContainerDied","Data":"3c229e0a1a59418ea93ecd0ed3eea10a36b0db65c956d12878a6279bf8ef6a06"} Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.302554 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.302580 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c229e0a1a59418ea93ecd0ed3eea10a36b0db65c956d12878a6279bf8ef6a06" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.305160 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-p9ljs" event={"ID":"112204a1-12d6-49b5-b97e-de4daab49dcf","Type":"ContainerDied","Data":"1e32ef65ab7f89fb7990abe8d40495bef02031f8258cd2be4cdb5fd231e0255d"} Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.305217 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e32ef65ab7f89fb7990abe8d40495bef02031f8258cd2be4cdb5fd231e0255d" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.305357 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-p9ljs" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.641556 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 23 14:33:44 crc kubenswrapper[4775]: E0123 14:33:44.642178 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b95fa161-1171-4dc2-b0be-3aa279cb717d" containerName="mariadb-account-create-update" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.642191 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="b95fa161-1171-4dc2-b0be-3aa279cb717d" containerName="mariadb-account-create-update" Jan 23 14:33:44 crc kubenswrapper[4775]: E0123 14:33:44.642206 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4b0dbf6-948b-45c4-b5a0-6027f816c873" containerName="mariadb-account-create-update" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.642212 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4b0dbf6-948b-45c4-b5a0-6027f816c873" containerName="mariadb-account-create-update" Jan 23 14:33:44 crc kubenswrapper[4775]: E0123 14:33:44.642223 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e86f57ad-0eba-4794-8f64-f70609e535e8" containerName="mariadb-database-create" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.642229 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e86f57ad-0eba-4794-8f64-f70609e535e8" containerName="mariadb-database-create" Jan 23 14:33:44 crc kubenswrapper[4775]: E0123 14:33:44.642242 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="112204a1-12d6-49b5-b97e-de4daab49dcf" containerName="mariadb-database-create" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.642248 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="112204a1-12d6-49b5-b97e-de4daab49dcf" containerName="mariadb-database-create" Jan 23 14:33:44 crc kubenswrapper[4775]: E0123 14:33:44.642258 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c" containerName="mariadb-database-create" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.642265 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c" containerName="mariadb-database-create" Jan 23 14:33:44 crc kubenswrapper[4775]: E0123 14:33:44.642274 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff6b200-7364-4e13-956d-628abd48cbaa" containerName="mariadb-account-create-update" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.642280 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff6b200-7364-4e13-956d-628abd48cbaa" containerName="mariadb-account-create-update" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.642414 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="e86f57ad-0eba-4794-8f64-f70609e535e8" containerName="mariadb-database-create" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.642428 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="112204a1-12d6-49b5-b97e-de4daab49dcf" containerName="mariadb-database-create" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.642436 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ff6b200-7364-4e13-956d-628abd48cbaa" containerName="mariadb-account-create-update" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.642445 4775 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c" containerName="mariadb-database-create" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.642461 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="b95fa161-1171-4dc2-b0be-3aa279cb717d" containerName="mariadb-account-create-update" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.642467 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4b0dbf6-948b-45c4-b5a0-6027f816c873" containerName="mariadb-account-create-update" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.642942 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.644631 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-289sx" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.645033 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-compute-fake1-compute-config-data" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.649101 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.714696 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:33:44 crc kubenswrapper[4775]: E0123 14:33:44.714938 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.734864 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.735994 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.745295 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-novncproxy-config-data" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.748087 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.758075 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzqmq\" (UniqueName: \"kubernetes.io/projected/a3bbc7d7-fc9d-490e-9610-55805e5e876c-kube-api-access-vzqmq\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"a3bbc7d7-fc9d-490e-9610-55805e5e876c\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.758228 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3bbc7d7-fc9d-490e-9610-55805e5e876c-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"a3bbc7d7-fc9d-490e-9610-55805e5e876c\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.859440 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51e63565-a2ef-4d12-af2f-f3dc6c2942d9-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"51e63565-a2ef-4d12-af2f-f3dc6c2942d9\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.859482 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzqmq\" (UniqueName: \"kubernetes.io/projected/a3bbc7d7-fc9d-490e-9610-55805e5e876c-kube-api-access-vzqmq\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"a3bbc7d7-fc9d-490e-9610-55805e5e876c\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.859844 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64gdf\" (UniqueName: \"kubernetes.io/projected/51e63565-a2ef-4d12-af2f-f3dc6c2942d9-kube-api-access-64gdf\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"51e63565-a2ef-4d12-af2f-f3dc6c2942d9\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.859946 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3bbc7d7-fc9d-490e-9610-55805e5e876c-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"a3bbc7d7-fc9d-490e-9610-55805e5e876c\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.867591 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3bbc7d7-fc9d-490e-9610-55805e5e876c-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"a3bbc7d7-fc9d-490e-9610-55805e5e876c\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.889195 4775 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vzqmq\" (UniqueName: \"kubernetes.io/projected/a3bbc7d7-fc9d-490e-9610-55805e5e876c-kube-api-access-vzqmq\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"a3bbc7d7-fc9d-490e-9610-55805e5e876c\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.960944 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64gdf\" (UniqueName: \"kubernetes.io/projected/51e63565-a2ef-4d12-af2f-f3dc6c2942d9-kube-api-access-64gdf\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"51e63565-a2ef-4d12-af2f-f3dc6c2942d9\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.961038 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51e63565-a2ef-4d12-af2f-f3dc6c2942d9-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"51e63565-a2ef-4d12-af2f-f3dc6c2942d9\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.965646 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51e63565-a2ef-4d12-af2f-f3dc6c2942d9-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"51e63565-a2ef-4d12-af2f-f3dc6c2942d9\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.992725 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:33:45 crc kubenswrapper[4775]: I0123 14:33:45.008150 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64gdf\" (UniqueName: \"kubernetes.io/projected/51e63565-a2ef-4d12-af2f-f3dc6c2942d9-kube-api-access-64gdf\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"51e63565-a2ef-4d12-af2f-f3dc6c2942d9\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:33:45 crc kubenswrapper[4775]: I0123 14:33:45.054723 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:33:45 crc kubenswrapper[4775]: I0123 14:33:45.472358 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 23 14:33:45 crc kubenswrapper[4775]: I0123 14:33:45.478549 4775 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 14:33:45 crc kubenswrapper[4775]: I0123 14:33:45.588222 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 23 14:33:45 crc kubenswrapper[4775]: W0123 14:33:45.591901 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51e63565_a2ef_4d12_af2f_f3dc6c2942d9.slice/crio-176dcff14ce2e75b9b75fea74f3c3fe40830311cc826cb992f71f0968d9bd274 WatchSource:0}: Error finding container 176dcff14ce2e75b9b75fea74f3c3fe40830311cc826cb992f71f0968d9bd274: Status 404 returned error can't find the container with id 176dcff14ce2e75b9b75fea74f3c3fe40830311cc826cb992f71f0968d9bd274 Jan 23 14:33:45 crc kubenswrapper[4775]: I0123 14:33:45.959382 4775 scope.go:117] "RemoveContainer" containerID="4198c894ee5e56e286b0cbfe28fec2b93833db9cb46297fad57dce94d57cabf9" Jan 23 14:33:45 crc kubenswrapper[4775]: I0123 14:33:45.983865 4775 scope.go:117] "RemoveContainer" containerID="a2f2a732f030cd4d4d5df85398503f60726ce73a20188125433f4f1e1c54a86f" Jan 23 14:33:46 crc kubenswrapper[4775]: I0123 14:33:46.013392 4775 scope.go:117] "RemoveContainer" containerID="45eb281a90784378326e137fb73e4ed8e5e8582744a86eeaf4ee707b7c73c128" Jan 23 14:33:46 crc kubenswrapper[4775]: I0123 14:33:46.048646 4775 scope.go:117] "RemoveContainer" containerID="4416e85269b1c4f191cdc1bfa52a3e5ae7f058b4bf7a7282d8bc2d3b5f93f115" Jan 23 14:33:46 crc kubenswrapper[4775]: I0123 14:33:46.106983 4775 scope.go:117] "RemoveContainer" containerID="750eb99745aee2f0e8dca16ba12e68de151eeb1758e4a96888cb2f880483b793" Jan 23 14:33:46 crc kubenswrapper[4775]: I0123 14:33:46.123022 4775 scope.go:117] "RemoveContainer" containerID="711f68f5e6e9927f1844635ae91ffaae80eaf390a5a10c418f40e975d1662c3b" Jan 23 14:33:46 crc kubenswrapper[4775]: I0123 14:33:46.160211 4775 scope.go:117] "RemoveContainer" containerID="204b70c75b108eb876b17c40860b15870affa382adc84f2a27cb048cf9061fa7" Jan 23 14:33:46 crc kubenswrapper[4775]: I0123 14:33:46.352920 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"a3bbc7d7-fc9d-490e-9610-55805e5e876c","Type":"ContainerStarted","Data":"3b893ae1dbc88ba1326e6a0a0bd54925381cdc400ec55f87f58040e0b56c3ac3"} Jan 23 14:33:46 crc kubenswrapper[4775]: I0123 14:33:46.359019 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"51e63565-a2ef-4d12-af2f-f3dc6c2942d9","Type":"ContainerStarted","Data":"adde5b85c57c8932f4247945dfd19a8b18268f554e69fc71d0caf9b3c97cbb35"} Jan 23 14:33:46 crc kubenswrapper[4775]: I0123 14:33:46.359073 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"51e63565-a2ef-4d12-af2f-f3dc6c2942d9","Type":"ContainerStarted","Data":"176dcff14ce2e75b9b75fea74f3c3fe40830311cc826cb992f71f0968d9bd274"} Jan 23 14:33:46 crc kubenswrapper[4775]: I0123 14:33:46.383302 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podStartSLOduration=2.38318019 podStartE2EDuration="2.38318019s" podCreationTimestamp="2026-01-23 14:33:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:33:46.374074573 +0000 UTC m=+1773.368903313" watchObservedRunningTime="2026-01-23 14:33:46.38318019 +0000 UTC m=+1773.378008930" Jan 23 14:33:50 crc kubenswrapper[4775]: I0123 14:33:50.055268 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:33:55 crc kubenswrapper[4775]: I0123 14:33:55.055621 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:33:55 crc kubenswrapper[4775]: I0123 14:33:55.075322 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:33:55 crc kubenswrapper[4775]: I0123 14:33:55.449937 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:33:57 crc kubenswrapper[4775]: I0123 14:33:57.040891 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/root-account-create-update-6bcp5"] Jan 23 14:33:57 crc kubenswrapper[4775]: I0123 14:33:57.055419 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/root-account-create-update-6bcp5"] Jan 23 14:33:57 crc kubenswrapper[4775]: I0123 14:33:57.456394 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"a3bbc7d7-fc9d-490e-9610-55805e5e876c","Type":"ContainerStarted","Data":"c947db4e331433b229677ab1076193fcb4125ba5042f6359b54fe32fa2db3874"} Jan 23 14:33:57 crc kubenswrapper[4775]: I0123 14:33:57.456683 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:33:57 crc kubenswrapper[4775]: I0123 14:33:57.479508 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podStartSLOduration=2.696730101 podStartE2EDuration="13.479491525s" podCreationTimestamp="2026-01-23 14:33:44 +0000 UTC" firstStartedPulling="2026-01-23 14:33:45.478173156 +0000 UTC m=+1772.473001906" lastFinishedPulling="2026-01-23 14:33:56.26093455 +0000 UTC m=+1783.255763330" observedRunningTime="2026-01-23 14:33:57.474256357 +0000 UTC m=+1784.469085187" watchObservedRunningTime="2026-01-23 14:33:57.479491525 +0000 UTC m=+1784.474320275" Jan 23 14:33:57 crc kubenswrapper[4775]: I0123 14:33:57.508673 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:33:57 crc kubenswrapper[4775]: I0123 14:33:57.749177 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccc48032-9af5-4d79-bc89-f7d576911b23" path="/var/lib/kubelet/pods/ccc48032-9af5-4d79-bc89-f7d576911b23/volumes" Jan 23 14:33:58 crc kubenswrapper[4775]: I0123 14:33:58.714400 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:33:58 crc kubenswrapper[4775]: E0123 14:33:58.714979 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.636413 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5"] Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.644944 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5"] Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.669495 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l"] Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.676726 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l"] Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.723125 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b47b9373-0dd5-4635-a8f9-06aa0fc60174" path="/var/lib/kubelet/pods/b47b9373-0dd5-4635-a8f9-06aa0fc60174/volumes" Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.723763 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4701d5c-309d-4969-852b-83626330e0df" path="/var/lib/kubelet/pods/c4701d5c-309d-4969-852b-83626330e0df/volumes" Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.812160 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc"] Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.814116 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.816188 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-scripts" Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.817516 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.829412 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc"] Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.878195 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855"] Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.879418 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.882065 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-scripts" Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.885123 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.887660 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855"] Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.918329 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/004165d0-70f3-4e04-8f77-1342a98147bb-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-svgzc\" (UID: \"004165d0-70f3-4e04-8f77-1342a98147bb\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.918401 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/004165d0-70f3-4e04-8f77-1342a98147bb-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-svgzc\" (UID: \"004165d0-70f3-4e04-8f77-1342a98147bb\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.918482 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f25e3b63-3402-4d38-8f18-e4f015797854-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-hr855\" (UID: \"f25e3b63-3402-4d38-8f18-e4f015797854\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.918525 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f25e3b63-3402-4d38-8f18-e4f015797854-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-hr855\" (UID: \"f25e3b63-3402-4d38-8f18-e4f015797854\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.918561 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg8rb\" (UniqueName: \"kubernetes.io/projected/004165d0-70f3-4e04-8f77-1342a98147bb-kube-api-access-wg8rb\") pod \"nova-kuttl-cell1-conductor-db-sync-svgzc\" (UID: \"004165d0-70f3-4e04-8f77-1342a98147bb\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.918606 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsd26\" (UniqueName: \"kubernetes.io/projected/f25e3b63-3402-4d38-8f18-e4f015797854-kube-api-access-fsd26\") pod \"nova-kuttl-cell0-conductor-db-sync-hr855\" (UID: \"f25e3b63-3402-4d38-8f18-e4f015797854\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" Jan 23 14:34:02 crc kubenswrapper[4775]: I0123 14:34:02.019662 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/004165d0-70f3-4e04-8f77-1342a98147bb-scripts\") pod 
\"nova-kuttl-cell1-conductor-db-sync-svgzc\" (UID: \"004165d0-70f3-4e04-8f77-1342a98147bb\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" Jan 23 14:34:02 crc kubenswrapper[4775]: I0123 14:34:02.019705 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/004165d0-70f3-4e04-8f77-1342a98147bb-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-svgzc\" (UID: \"004165d0-70f3-4e04-8f77-1342a98147bb\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" Jan 23 14:34:02 crc kubenswrapper[4775]: I0123 14:34:02.019750 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f25e3b63-3402-4d38-8f18-e4f015797854-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-hr855\" (UID: \"f25e3b63-3402-4d38-8f18-e4f015797854\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" Jan 23 14:34:02 crc kubenswrapper[4775]: I0123 14:34:02.019774 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f25e3b63-3402-4d38-8f18-e4f015797854-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-hr855\" (UID: \"f25e3b63-3402-4d38-8f18-e4f015797854\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" Jan 23 14:34:02 crc kubenswrapper[4775]: I0123 14:34:02.019794 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg8rb\" (UniqueName: \"kubernetes.io/projected/004165d0-70f3-4e04-8f77-1342a98147bb-kube-api-access-wg8rb\") pod \"nova-kuttl-cell1-conductor-db-sync-svgzc\" (UID: \"004165d0-70f3-4e04-8f77-1342a98147bb\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" Jan 23 14:34:02 crc kubenswrapper[4775]: I0123 14:34:02.019835 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsd26\" (UniqueName: \"kubernetes.io/projected/f25e3b63-3402-4d38-8f18-e4f015797854-kube-api-access-fsd26\") pod \"nova-kuttl-cell0-conductor-db-sync-hr855\" (UID: \"f25e3b63-3402-4d38-8f18-e4f015797854\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" Jan 23 14:34:02 crc kubenswrapper[4775]: I0123 14:34:02.028154 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/004165d0-70f3-4e04-8f77-1342a98147bb-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-svgzc\" (UID: \"004165d0-70f3-4e04-8f77-1342a98147bb\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" Jan 23 14:34:02 crc kubenswrapper[4775]: I0123 14:34:02.029107 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/004165d0-70f3-4e04-8f77-1342a98147bb-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-svgzc\" (UID: \"004165d0-70f3-4e04-8f77-1342a98147bb\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" Jan 23 14:34:02 crc kubenswrapper[4775]: I0123 14:34:02.031639 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f25e3b63-3402-4d38-8f18-e4f015797854-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-hr855\" (UID: \"f25e3b63-3402-4d38-8f18-e4f015797854\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" Jan 23 14:34:02 crc kubenswrapper[4775]: I0123 14:34:02.032613 4775 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f25e3b63-3402-4d38-8f18-e4f015797854-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-hr855\" (UID: \"f25e3b63-3402-4d38-8f18-e4f015797854\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" Jan 23 14:34:02 crc kubenswrapper[4775]: I0123 14:34:02.039401 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsd26\" (UniqueName: \"kubernetes.io/projected/f25e3b63-3402-4d38-8f18-e4f015797854-kube-api-access-fsd26\") pod \"nova-kuttl-cell0-conductor-db-sync-hr855\" (UID: \"f25e3b63-3402-4d38-8f18-e4f015797854\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" Jan 23 14:34:02 crc kubenswrapper[4775]: I0123 14:34:02.041711 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg8rb\" (UniqueName: \"kubernetes.io/projected/004165d0-70f3-4e04-8f77-1342a98147bb-kube-api-access-wg8rb\") pod \"nova-kuttl-cell1-conductor-db-sync-svgzc\" (UID: \"004165d0-70f3-4e04-8f77-1342a98147bb\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" Jan 23 14:34:02 crc kubenswrapper[4775]: I0123 14:34:02.129620 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" Jan 23 14:34:02 crc kubenswrapper[4775]: I0123 14:34:02.192915 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" Jan 23 14:34:02 crc kubenswrapper[4775]: I0123 14:34:02.632362 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc"] Jan 23 14:34:02 crc kubenswrapper[4775]: W0123 14:34:02.642281 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod004165d0_70f3_4e04_8f77_1342a98147bb.slice/crio-e9b71b7be1b179203948c5a4118fb37e9e60019ad02696027e31503455e674d4 WatchSource:0}: Error finding container e9b71b7be1b179203948c5a4118fb37e9e60019ad02696027e31503455e674d4: Status 404 returned error can't find the container with id e9b71b7be1b179203948c5a4118fb37e9e60019ad02696027e31503455e674d4 Jan 23 14:34:02 crc kubenswrapper[4775]: I0123 14:34:02.693576 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855"] Jan 23 14:34:03 crc kubenswrapper[4775]: I0123 14:34:03.517469 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" event={"ID":"f25e3b63-3402-4d38-8f18-e4f015797854","Type":"ContainerStarted","Data":"6d9268bfe9748ec6624655bc60aabe83c7ae7e713292756baef52641a7e4c393"} Jan 23 14:34:03 crc kubenswrapper[4775]: I0123 14:34:03.518077 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" event={"ID":"f25e3b63-3402-4d38-8f18-e4f015797854","Type":"ContainerStarted","Data":"c1c070e8bc953626ab6530de0fd2da83e1ce87a1fc04dcf6d9efec5bbccb4de5"} Jan 23 14:34:03 crc kubenswrapper[4775]: I0123 14:34:03.520525 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" event={"ID":"004165d0-70f3-4e04-8f77-1342a98147bb","Type":"ContainerStarted","Data":"2a4347263630b9bca7d3c8fbb1ac8953b6f41d8acd21d8aebe8a8fad3474db05"} Jan 23 14:34:03 crc 
kubenswrapper[4775]: I0123 14:34:03.520564 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" event={"ID":"004165d0-70f3-4e04-8f77-1342a98147bb","Type":"ContainerStarted","Data":"e9b71b7be1b179203948c5a4118fb37e9e60019ad02696027e31503455e674d4"} Jan 23 14:34:03 crc kubenswrapper[4775]: I0123 14:34:03.543347 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" podStartSLOduration=2.543331343 podStartE2EDuration="2.543331343s" podCreationTimestamp="2026-01-23 14:34:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:34:03.533066794 +0000 UTC m=+1790.527895574" watchObservedRunningTime="2026-01-23 14:34:03.543331343 +0000 UTC m=+1790.538160073" Jan 23 14:34:05 crc kubenswrapper[4775]: I0123 14:34:05.537739 4775 generic.go:334] "Generic (PLEG): container finished" podID="004165d0-70f3-4e04-8f77-1342a98147bb" containerID="2a4347263630b9bca7d3c8fbb1ac8953b6f41d8acd21d8aebe8a8fad3474db05" exitCode=0 Jan 23 14:34:05 crc kubenswrapper[4775]: I0123 14:34:05.538137 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" event={"ID":"004165d0-70f3-4e04-8f77-1342a98147bb","Type":"ContainerDied","Data":"2a4347263630b9bca7d3c8fbb1ac8953b6f41d8acd21d8aebe8a8fad3474db05"} Jan 23 14:34:06 crc kubenswrapper[4775]: I0123 14:34:06.032049 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-db-sync-2qsr9"] Jan 23 14:34:06 crc kubenswrapper[4775]: I0123 14:34:06.049430 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-db-sync-2qsr9"] Jan 23 14:34:06 crc kubenswrapper[4775]: I0123 14:34:06.878937 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" Jan 23 14:34:06 crc kubenswrapper[4775]: I0123 14:34:06.919479 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg8rb\" (UniqueName: \"kubernetes.io/projected/004165d0-70f3-4e04-8f77-1342a98147bb-kube-api-access-wg8rb\") pod \"004165d0-70f3-4e04-8f77-1342a98147bb\" (UID: \"004165d0-70f3-4e04-8f77-1342a98147bb\") " Jan 23 14:34:06 crc kubenswrapper[4775]: I0123 14:34:06.919572 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/004165d0-70f3-4e04-8f77-1342a98147bb-config-data\") pod \"004165d0-70f3-4e04-8f77-1342a98147bb\" (UID: \"004165d0-70f3-4e04-8f77-1342a98147bb\") " Jan 23 14:34:06 crc kubenswrapper[4775]: I0123 14:34:06.919678 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/004165d0-70f3-4e04-8f77-1342a98147bb-scripts\") pod \"004165d0-70f3-4e04-8f77-1342a98147bb\" (UID: \"004165d0-70f3-4e04-8f77-1342a98147bb\") " Jan 23 14:34:06 crc kubenswrapper[4775]: I0123 14:34:06.937115 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/004165d0-70f3-4e04-8f77-1342a98147bb-kube-api-access-wg8rb" (OuterVolumeSpecName: "kube-api-access-wg8rb") pod "004165d0-70f3-4e04-8f77-1342a98147bb" (UID: "004165d0-70f3-4e04-8f77-1342a98147bb"). InnerVolumeSpecName "kube-api-access-wg8rb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:34:06 crc kubenswrapper[4775]: I0123 14:34:06.942730 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/004165d0-70f3-4e04-8f77-1342a98147bb-scripts" (OuterVolumeSpecName: "scripts") pod "004165d0-70f3-4e04-8f77-1342a98147bb" (UID: "004165d0-70f3-4e04-8f77-1342a98147bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:34:06 crc kubenswrapper[4775]: I0123 14:34:06.973972 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/004165d0-70f3-4e04-8f77-1342a98147bb-config-data" (OuterVolumeSpecName: "config-data") pod "004165d0-70f3-4e04-8f77-1342a98147bb" (UID: "004165d0-70f3-4e04-8f77-1342a98147bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:34:07 crc kubenswrapper[4775]: I0123 14:34:07.021571 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg8rb\" (UniqueName: \"kubernetes.io/projected/004165d0-70f3-4e04-8f77-1342a98147bb-kube-api-access-wg8rb\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:07 crc kubenswrapper[4775]: I0123 14:34:07.021606 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/004165d0-70f3-4e04-8f77-1342a98147bb-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:07 crc kubenswrapper[4775]: I0123 14:34:07.021620 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/004165d0-70f3-4e04-8f77-1342a98147bb-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:07 crc kubenswrapper[4775]: I0123 14:34:07.559943 4775 generic.go:334] "Generic (PLEG): container finished" podID="f25e3b63-3402-4d38-8f18-e4f015797854" containerID="6d9268bfe9748ec6624655bc60aabe83c7ae7e713292756baef52641a7e4c393" exitCode=0 Jan 23 14:34:07 crc kubenswrapper[4775]: I0123 14:34:07.560041 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" event={"ID":"f25e3b63-3402-4d38-8f18-e4f015797854","Type":"ContainerDied","Data":"6d9268bfe9748ec6624655bc60aabe83c7ae7e713292756baef52641a7e4c393"} Jan 23 14:34:07 crc kubenswrapper[4775]: I0123 14:34:07.562927 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" event={"ID":"004165d0-70f3-4e04-8f77-1342a98147bb","Type":"ContainerDied","Data":"e9b71b7be1b179203948c5a4118fb37e9e60019ad02696027e31503455e674d4"} Jan 23 14:34:07 crc kubenswrapper[4775]: I0123 14:34:07.562965 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9b71b7be1b179203948c5a4118fb37e9e60019ad02696027e31503455e674d4" Jan 23 14:34:07 crc kubenswrapper[4775]: I0123 14:34:07.563084 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" Jan 23 14:34:07 crc kubenswrapper[4775]: I0123 14:34:07.730248 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c017749-eae9-4edd-91eb-21b25275a986" path="/var/lib/kubelet/pods/2c017749-eae9-4edd-91eb-21b25275a986/volumes" Jan 23 14:34:07 crc kubenswrapper[4775]: I0123 14:34:07.997137 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:34:07 crc kubenswrapper[4775]: E0123 14:34:07.997527 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="004165d0-70f3-4e04-8f77-1342a98147bb" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 23 14:34:07 crc kubenswrapper[4775]: I0123 14:34:07.997544 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="004165d0-70f3-4e04-8f77-1342a98147bb" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 23 14:34:07 crc kubenswrapper[4775]: I0123 14:34:07.997755 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="004165d0-70f3-4e04-8f77-1342a98147bb" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 23 14:34:07 crc kubenswrapper[4775]: I0123 14:34:07.998565 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.001631 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.032111 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.146895 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hj9j\" (UniqueName: \"kubernetes.io/projected/6bcae715-33d1-4c44-9a33-f617c489dd8c-kube-api-access-7hj9j\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"6bcae715-33d1-4c44-9a33-f617c489dd8c\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.147038 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bcae715-33d1-4c44-9a33-f617c489dd8c-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"6bcae715-33d1-4c44-9a33-f617c489dd8c\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.248967 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hj9j\" (UniqueName: \"kubernetes.io/projected/6bcae715-33d1-4c44-9a33-f617c489dd8c-kube-api-access-7hj9j\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"6bcae715-33d1-4c44-9a33-f617c489dd8c\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.249146 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bcae715-33d1-4c44-9a33-f617c489dd8c-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"6bcae715-33d1-4c44-9a33-f617c489dd8c\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.262222 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6bcae715-33d1-4c44-9a33-f617c489dd8c-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"6bcae715-33d1-4c44-9a33-f617c489dd8c\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.273691 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hj9j\" (UniqueName: \"kubernetes.io/projected/6bcae715-33d1-4c44-9a33-f617c489dd8c-kube-api-access-7hj9j\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"6bcae715-33d1-4c44-9a33-f617c489dd8c\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.322493 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.560924 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.577167 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"6bcae715-33d1-4c44-9a33-f617c489dd8c","Type":"ContainerStarted","Data":"993d5972eb5c6f4c100b944f0126ed4f2e54f4d9412dabbd89c853572013d71a"} Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.828553 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.870298 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsd26\" (UniqueName: \"kubernetes.io/projected/f25e3b63-3402-4d38-8f18-e4f015797854-kube-api-access-fsd26\") pod \"f25e3b63-3402-4d38-8f18-e4f015797854\" (UID: \"f25e3b63-3402-4d38-8f18-e4f015797854\") " Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.870376 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f25e3b63-3402-4d38-8f18-e4f015797854-scripts\") pod \"f25e3b63-3402-4d38-8f18-e4f015797854\" (UID: \"f25e3b63-3402-4d38-8f18-e4f015797854\") " Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.870405 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f25e3b63-3402-4d38-8f18-e4f015797854-config-data\") pod \"f25e3b63-3402-4d38-8f18-e4f015797854\" (UID: \"f25e3b63-3402-4d38-8f18-e4f015797854\") " Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.888934 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f25e3b63-3402-4d38-8f18-e4f015797854-scripts" (OuterVolumeSpecName: "scripts") pod "f25e3b63-3402-4d38-8f18-e4f015797854" (UID: "f25e3b63-3402-4d38-8f18-e4f015797854"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.893140 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f25e3b63-3402-4d38-8f18-e4f015797854-kube-api-access-fsd26" (OuterVolumeSpecName: "kube-api-access-fsd26") pod "f25e3b63-3402-4d38-8f18-e4f015797854" (UID: "f25e3b63-3402-4d38-8f18-e4f015797854"). InnerVolumeSpecName "kube-api-access-fsd26". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.904098 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f25e3b63-3402-4d38-8f18-e4f015797854-config-data" (OuterVolumeSpecName: "config-data") pod "f25e3b63-3402-4d38-8f18-e4f015797854" (UID: "f25e3b63-3402-4d38-8f18-e4f015797854"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.973024 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f25e3b63-3402-4d38-8f18-e4f015797854-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.973083 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f25e3b63-3402-4d38-8f18-e4f015797854-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.973109 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsd26\" (UniqueName: \"kubernetes.io/projected/f25e3b63-3402-4d38-8f18-e4f015797854-kube-api-access-fsd26\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.599980 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"6bcae715-33d1-4c44-9a33-f617c489dd8c","Type":"ContainerStarted","Data":"00f657f92e0b5f8eeea6508bbfb05372a2ce4865e934064fa0ca5e2ac689ab30"} Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.600448 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.602607 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" event={"ID":"f25e3b63-3402-4d38-8f18-e4f015797854","Type":"ContainerDied","Data":"c1c070e8bc953626ab6530de0fd2da83e1ce87a1fc04dcf6d9efec5bbccb4de5"} Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.602639 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1c070e8bc953626ab6530de0fd2da83e1ce87a1fc04dcf6d9efec5bbccb4de5" Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.602732 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.639042 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podStartSLOduration=2.6390054 podStartE2EDuration="2.6390054s" podCreationTimestamp="2026-01-23 14:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:34:09.622216456 +0000 UTC m=+1796.617045206" watchObservedRunningTime="2026-01-23 14:34:09.6390054 +0000 UTC m=+1796.633834180" Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.675608 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:34:09 crc kubenswrapper[4775]: E0123 14:34:09.676068 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f25e3b63-3402-4d38-8f18-e4f015797854" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.676091 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f25e3b63-3402-4d38-8f18-e4f015797854" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.676280 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f25e3b63-3402-4d38-8f18-e4f015797854" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.676970 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.679271 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.683219 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqtbj\" (UniqueName: \"kubernetes.io/projected/7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa-kube-api-access-hqtbj\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.683321 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.684021 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.792517 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.792653 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqtbj\" (UniqueName: \"kubernetes.io/projected/7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa-kube-api-access-hqtbj\") pod 
\"nova-kuttl-cell0-conductor-0\" (UID: \"7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.798175 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.823251 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqtbj\" (UniqueName: \"kubernetes.io/projected/7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa-kube-api-access-hqtbj\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:34:10 crc kubenswrapper[4775]: I0123 14:34:10.004215 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:34:10 crc kubenswrapper[4775]: I0123 14:34:10.267064 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:34:10 crc kubenswrapper[4775]: W0123 14:34:10.275944 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b007ba6_d3ea_4b9d_b325_3ffabb38bdfa.slice/crio-862d714ec5d72fa2cecc76c787b92a298898df37e1f6457c744d6aed52ae7549 WatchSource:0}: Error finding container 862d714ec5d72fa2cecc76c787b92a298898df37e1f6457c744d6aed52ae7549: Status 404 returned error can't find the container with id 862d714ec5d72fa2cecc76c787b92a298898df37e1f6457c744d6aed52ae7549 Jan 23 14:34:10 crc kubenswrapper[4775]: I0123 14:34:10.611957 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa","Type":"ContainerStarted","Data":"862d714ec5d72fa2cecc76c787b92a298898df37e1f6457c744d6aed52ae7549"} Jan 23 14:34:11 crc kubenswrapper[4775]: I0123 14:34:11.622057 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa","Type":"ContainerStarted","Data":"8ebbe7df337eed7eec1cd0d49f40ddb05c909061e66825a9f581a0ea754192e7"} Jan 23 14:34:11 crc kubenswrapper[4775]: I0123 14:34:11.623539 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:34:12 crc kubenswrapper[4775]: I0123 14:34:12.713905 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:34:12 crc kubenswrapper[4775]: E0123 14:34:12.714433 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:34:13 crc kubenswrapper[4775]: I0123 14:34:13.359254 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:34:13 crc 
kubenswrapper[4775]: I0123 14:34:13.378498 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podStartSLOduration=4.37848056 podStartE2EDuration="4.37848056s" podCreationTimestamp="2026-01-23 14:34:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:34:11.657408272 +0000 UTC m=+1798.652237012" watchObservedRunningTime="2026-01-23 14:34:13.37848056 +0000 UTC m=+1800.373309290" Jan 23 14:34:13 crc kubenswrapper[4775]: I0123 14:34:13.893548 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2"] Jan 23 14:34:13 crc kubenswrapper[4775]: I0123 14:34:13.894667 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" Jan 23 14:34:13 crc kubenswrapper[4775]: I0123 14:34:13.897358 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-config-data" Jan 23 14:34:13 crc kubenswrapper[4775]: I0123 14:34:13.899542 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-scripts" Jan 23 14:34:13 crc kubenswrapper[4775]: I0123 14:34:13.911530 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2"] Jan 23 14:34:13 crc kubenswrapper[4775]: I0123 14:34:13.932677 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df"] Jan 23 14:34:13 crc kubenswrapper[4775]: I0123 14:34:13.933673 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" Jan 23 14:34:13 crc kubenswrapper[4775]: I0123 14:34:13.948614 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df"] Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.007655 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec6263e3-855a-48e5-ae77-25462d7e5a13-scripts\") pod \"nova-kuttl-cell1-host-discover-qb9df\" (UID: \"ec6263e3-855a-48e5-ae77-25462d7e5a13\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.007739 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db75fd7c-ba91-4090-ac20-0009c06598f3-scripts\") pod \"nova-kuttl-cell1-cell-mapping-kmnk2\" (UID: \"db75fd7c-ba91-4090-ac20-0009c06598f3\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.007775 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r6jr\" (UniqueName: \"kubernetes.io/projected/ec6263e3-855a-48e5-ae77-25462d7e5a13-kube-api-access-4r6jr\") pod \"nova-kuttl-cell1-host-discover-qb9df\" (UID: \"ec6263e3-855a-48e5-ae77-25462d7e5a13\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.007866 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db75fd7c-ba91-4090-ac20-0009c06598f3-config-data\") pod \"nova-kuttl-cell1-cell-mapping-kmnk2\" (UID: \"db75fd7c-ba91-4090-ac20-0009c06598f3\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.007942 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdkdk\" (UniqueName: \"kubernetes.io/projected/db75fd7c-ba91-4090-ac20-0009c06598f3-kube-api-access-wdkdk\") pod \"nova-kuttl-cell1-cell-mapping-kmnk2\" (UID: \"db75fd7c-ba91-4090-ac20-0009c06598f3\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.007976 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec6263e3-855a-48e5-ae77-25462d7e5a13-config-data\") pod \"nova-kuttl-cell1-host-discover-qb9df\" (UID: \"ec6263e3-855a-48e5-ae77-25462d7e5a13\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.036572 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/placement-db-sync-sgnh6"] Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.050155 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/placement-db-sync-sgnh6"] Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.108912 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdkdk\" (UniqueName: \"kubernetes.io/projected/db75fd7c-ba91-4090-ac20-0009c06598f3-kube-api-access-wdkdk\") pod \"nova-kuttl-cell1-cell-mapping-kmnk2\" (UID: \"db75fd7c-ba91-4090-ac20-0009c06598f3\") " 
pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.109188 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec6263e3-855a-48e5-ae77-25462d7e5a13-config-data\") pod \"nova-kuttl-cell1-host-discover-qb9df\" (UID: \"ec6263e3-855a-48e5-ae77-25462d7e5a13\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.109319 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec6263e3-855a-48e5-ae77-25462d7e5a13-scripts\") pod \"nova-kuttl-cell1-host-discover-qb9df\" (UID: \"ec6263e3-855a-48e5-ae77-25462d7e5a13\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.109413 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db75fd7c-ba91-4090-ac20-0009c06598f3-scripts\") pod \"nova-kuttl-cell1-cell-mapping-kmnk2\" (UID: \"db75fd7c-ba91-4090-ac20-0009c06598f3\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.109497 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r6jr\" (UniqueName: \"kubernetes.io/projected/ec6263e3-855a-48e5-ae77-25462d7e5a13-kube-api-access-4r6jr\") pod \"nova-kuttl-cell1-host-discover-qb9df\" (UID: \"ec6263e3-855a-48e5-ae77-25462d7e5a13\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.109609 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db75fd7c-ba91-4090-ac20-0009c06598f3-config-data\") pod \"nova-kuttl-cell1-cell-mapping-kmnk2\" (UID: \"db75fd7c-ba91-4090-ac20-0009c06598f3\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.115718 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec6263e3-855a-48e5-ae77-25462d7e5a13-scripts\") pod \"nova-kuttl-cell1-host-discover-qb9df\" (UID: \"ec6263e3-855a-48e5-ae77-25462d7e5a13\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.117505 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec6263e3-855a-48e5-ae77-25462d7e5a13-config-data\") pod \"nova-kuttl-cell1-host-discover-qb9df\" (UID: \"ec6263e3-855a-48e5-ae77-25462d7e5a13\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.117682 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db75fd7c-ba91-4090-ac20-0009c06598f3-config-data\") pod \"nova-kuttl-cell1-cell-mapping-kmnk2\" (UID: \"db75fd7c-ba91-4090-ac20-0009c06598f3\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.118120 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db75fd7c-ba91-4090-ac20-0009c06598f3-scripts\") pod \"nova-kuttl-cell1-cell-mapping-kmnk2\" (UID: 
\"db75fd7c-ba91-4090-ac20-0009c06598f3\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.132411 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r6jr\" (UniqueName: \"kubernetes.io/projected/ec6263e3-855a-48e5-ae77-25462d7e5a13-kube-api-access-4r6jr\") pod \"nova-kuttl-cell1-host-discover-qb9df\" (UID: \"ec6263e3-855a-48e5-ae77-25462d7e5a13\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.132789 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdkdk\" (UniqueName: \"kubernetes.io/projected/db75fd7c-ba91-4090-ac20-0009c06598f3-kube-api-access-wdkdk\") pod \"nova-kuttl-cell1-cell-mapping-kmnk2\" (UID: \"db75fd7c-ba91-4090-ac20-0009c06598f3\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.210910 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.247433 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" Jan 23 14:34:14 crc kubenswrapper[4775]: W0123 14:34:14.674600 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb75fd7c_ba91_4090_ac20_0009c06598f3.slice/crio-0e103c6efda37b7d35d8959b0e0cb3caa3e02a7d8cc645a289fbdc77aaad85e9 WatchSource:0}: Error finding container 0e103c6efda37b7d35d8959b0e0cb3caa3e02a7d8cc645a289fbdc77aaad85e9: Status 404 returned error can't find the container with id 0e103c6efda37b7d35d8959b0e0cb3caa3e02a7d8cc645a289fbdc77aaad85e9 Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.680236 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2"] Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.730777 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df"] Jan 23 14:34:14 crc kubenswrapper[4775]: W0123 14:34:14.733687 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec6263e3_855a_48e5_ae77_25462d7e5a13.slice/crio-77615832a6adcb83e24cbd6ebcba1287a3cc2749704e0310ddcb67c4e48edab3 WatchSource:0}: Error finding container 77615832a6adcb83e24cbd6ebcba1287a3cc2749704e0310ddcb67c4e48edab3: Status 404 returned error can't find the container with id 77615832a6adcb83e24cbd6ebcba1287a3cc2749704e0310ddcb67c4e48edab3 Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.033775 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.493856 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25"] Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.495040 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.498379 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-scripts" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.498545 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-config-data" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.506736 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25"] Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.650311 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snmpp\" (UniqueName: \"kubernetes.io/projected/71a6469b-2bd1-4004-9a3d-c9d87161efab-kube-api-access-snmpp\") pod \"nova-kuttl-cell0-cell-mapping-bvq25\" (UID: \"71a6469b-2bd1-4004-9a3d-c9d87161efab\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.650482 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71a6469b-2bd1-4004-9a3d-c9d87161efab-config-data\") pod \"nova-kuttl-cell0-cell-mapping-bvq25\" (UID: \"71a6469b-2bd1-4004-9a3d-c9d87161efab\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.650571 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71a6469b-2bd1-4004-9a3d-c9d87161efab-scripts\") pod \"nova-kuttl-cell0-cell-mapping-bvq25\" (UID: \"71a6469b-2bd1-4004-9a3d-c9d87161efab\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.662279 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" event={"ID":"ec6263e3-855a-48e5-ae77-25462d7e5a13","Type":"ContainerStarted","Data":"ba9aa3a2fb38d7f28f8fd65dca65cb5079144b881eab4e42302934720de2c14c"} Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.662326 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" event={"ID":"ec6263e3-855a-48e5-ae77-25462d7e5a13","Type":"ContainerStarted","Data":"77615832a6adcb83e24cbd6ebcba1287a3cc2749704e0310ddcb67c4e48edab3"} Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.663523 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" event={"ID":"db75fd7c-ba91-4090-ac20-0009c06598f3","Type":"ContainerStarted","Data":"ed23d1d8c2e578153c70d817dfeffe62e4af30e952a97680b7c773eb23fb2ca1"} Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.663647 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" event={"ID":"db75fd7c-ba91-4090-ac20-0009c06598f3","Type":"ContainerStarted","Data":"0e103c6efda37b7d35d8959b0e0cb3caa3e02a7d8cc645a289fbdc77aaad85e9"} Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.698674 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" podStartSLOduration=2.698658871 podStartE2EDuration="2.698658871s" podCreationTimestamp="2026-01-23 14:34:13 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:34:15.697896569 +0000 UTC m=+1802.692725309" watchObservedRunningTime="2026-01-23 14:34:15.698658871 +0000 UTC m=+1802.693487611" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.714416 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" podStartSLOduration=2.714401455 podStartE2EDuration="2.714401455s" podCreationTimestamp="2026-01-23 14:34:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:34:15.711003479 +0000 UTC m=+1802.705832209" watchObservedRunningTime="2026-01-23 14:34:15.714401455 +0000 UTC m=+1802.709230195" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.723982 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6" path="/var/lib/kubelet/pods/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6/volumes" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.753020 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snmpp\" (UniqueName: \"kubernetes.io/projected/71a6469b-2bd1-4004-9a3d-c9d87161efab-kube-api-access-snmpp\") pod \"nova-kuttl-cell0-cell-mapping-bvq25\" (UID: \"71a6469b-2bd1-4004-9a3d-c9d87161efab\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.753101 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71a6469b-2bd1-4004-9a3d-c9d87161efab-config-data\") pod \"nova-kuttl-cell0-cell-mapping-bvq25\" (UID: \"71a6469b-2bd1-4004-9a3d-c9d87161efab\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.753290 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71a6469b-2bd1-4004-9a3d-c9d87161efab-scripts\") pod \"nova-kuttl-cell0-cell-mapping-bvq25\" (UID: \"71a6469b-2bd1-4004-9a3d-c9d87161efab\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.759585 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71a6469b-2bd1-4004-9a3d-c9d87161efab-config-data\") pod \"nova-kuttl-cell0-cell-mapping-bvq25\" (UID: \"71a6469b-2bd1-4004-9a3d-c9d87161efab\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.765445 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71a6469b-2bd1-4004-9a3d-c9d87161efab-scripts\") pod \"nova-kuttl-cell0-cell-mapping-bvq25\" (UID: \"71a6469b-2bd1-4004-9a3d-c9d87161efab\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.770766 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snmpp\" (UniqueName: \"kubernetes.io/projected/71a6469b-2bd1-4004-9a3d-c9d87161efab-kube-api-access-snmpp\") pod \"nova-kuttl-cell0-cell-mapping-bvq25\" (UID: \"71a6469b-2bd1-4004-9a3d-c9d87161efab\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" Jan 23 
14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.776656 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.785457 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.787270 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.791236 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.797828 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.798796 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.810489 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.832288 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.855268 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55b13561-f097-4a68-bc50-482d017d838d-config-data\") pod \"nova-kuttl-api-0\" (UID: \"55b13561-f097-4a68-bc50-482d017d838d\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.855310 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cdb17e1-4872-47c6-a39d-eac9257959bf-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"9cdb17e1-4872-47c6-a39d-eac9257959bf\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.855354 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqghj\" (UniqueName: \"kubernetes.io/projected/55b13561-f097-4a68-bc50-482d017d838d-kube-api-access-qqghj\") pod \"nova-kuttl-api-0\" (UID: \"55b13561-f097-4a68-bc50-482d017d838d\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.855387 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55b13561-f097-4a68-bc50-482d017d838d-logs\") pod \"nova-kuttl-api-0\" (UID: \"55b13561-f097-4a68-bc50-482d017d838d\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.855419 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnzdc\" (UniqueName: \"kubernetes.io/projected/9cdb17e1-4872-47c6-a39d-eac9257959bf-kube-api-access-tnzdc\") pod \"nova-kuttl-scheduler-0\" (UID: \"9cdb17e1-4872-47c6-a39d-eac9257959bf\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.860711 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.889252 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.890365 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.896345 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.920655 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.956653 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55b13561-f097-4a68-bc50-482d017d838d-logs\") pod \"nova-kuttl-api-0\" (UID: \"55b13561-f097-4a68-bc50-482d017d838d\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.957051 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnzdc\" (UniqueName: \"kubernetes.io/projected/9cdb17e1-4872-47c6-a39d-eac9257959bf-kube-api-access-tnzdc\") pod \"nova-kuttl-scheduler-0\" (UID: \"9cdb17e1-4872-47c6-a39d-eac9257959bf\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.957190 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55b13561-f097-4a68-bc50-482d017d838d-config-data\") pod \"nova-kuttl-api-0\" (UID: \"55b13561-f097-4a68-bc50-482d017d838d\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.957214 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cdb17e1-4872-47c6-a39d-eac9257959bf-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"9cdb17e1-4872-47c6-a39d-eac9257959bf\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.957248 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqghj\" (UniqueName: \"kubernetes.io/projected/55b13561-f097-4a68-bc50-482d017d838d-kube-api-access-qqghj\") pod \"nova-kuttl-api-0\" (UID: \"55b13561-f097-4a68-bc50-482d017d838d\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.957925 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55b13561-f097-4a68-bc50-482d017d838d-logs\") pod \"nova-kuttl-api-0\" (UID: \"55b13561-f097-4a68-bc50-482d017d838d\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.968406 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cdb17e1-4872-47c6-a39d-eac9257959bf-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"9cdb17e1-4872-47c6-a39d-eac9257959bf\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.975585 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqghj\" 
(UniqueName: \"kubernetes.io/projected/55b13561-f097-4a68-bc50-482d017d838d-kube-api-access-qqghj\") pod \"nova-kuttl-api-0\" (UID: \"55b13561-f097-4a68-bc50-482d017d838d\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.979441 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55b13561-f097-4a68-bc50-482d017d838d-config-data\") pod \"nova-kuttl-api-0\" (UID: \"55b13561-f097-4a68-bc50-482d017d838d\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.998135 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnzdc\" (UniqueName: \"kubernetes.io/projected/9cdb17e1-4872-47c6-a39d-eac9257959bf-kube-api-access-tnzdc\") pod \"nova-kuttl-scheduler-0\" (UID: \"9cdb17e1-4872-47c6-a39d-eac9257959bf\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.059080 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e661567d-01ce-42ba-8257-c8a031e45a0f-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"e661567d-01ce-42ba-8257-c8a031e45a0f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.059156 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vfls\" (UniqueName: \"kubernetes.io/projected/e661567d-01ce-42ba-8257-c8a031e45a0f-kube-api-access-9vfls\") pod \"nova-kuttl-metadata-0\" (UID: \"e661567d-01ce-42ba-8257-c8a031e45a0f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.059226 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e661567d-01ce-42ba-8257-c8a031e45a0f-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"e661567d-01ce-42ba-8257-c8a031e45a0f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.149441 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.160300 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e661567d-01ce-42ba-8257-c8a031e45a0f-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"e661567d-01ce-42ba-8257-c8a031e45a0f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.160381 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vfls\" (UniqueName: \"kubernetes.io/projected/e661567d-01ce-42ba-8257-c8a031e45a0f-kube-api-access-9vfls\") pod \"nova-kuttl-metadata-0\" (UID: \"e661567d-01ce-42ba-8257-c8a031e45a0f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.160508 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e661567d-01ce-42ba-8257-c8a031e45a0f-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"e661567d-01ce-42ba-8257-c8a031e45a0f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.161195 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e661567d-01ce-42ba-8257-c8a031e45a0f-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"e661567d-01ce-42ba-8257-c8a031e45a0f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.165380 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e661567d-01ce-42ba-8257-c8a031e45a0f-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"e661567d-01ce-42ba-8257-c8a031e45a0f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.166960 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.179591 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vfls\" (UniqueName: \"kubernetes.io/projected/e661567d-01ce-42ba-8257-c8a031e45a0f-kube-api-access-9vfls\") pod \"nova-kuttl-metadata-0\" (UID: \"e661567d-01ce-42ba-8257-c8a031e45a0f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.244651 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.356396 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25"] Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.671680 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" event={"ID":"71a6469b-2bd1-4004-9a3d-c9d87161efab","Type":"ContainerStarted","Data":"8338a669e0d43937d5f843231e5fbbed5ec502884f9ba96c38e08d3114af925f"} Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.672141 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" event={"ID":"71a6469b-2bd1-4004-9a3d-c9d87161efab","Type":"ContainerStarted","Data":"156abf729125e825e482677cd02117e6955b3cd43618d229ad3b86794d80e8f0"} Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.686168 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.688740 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" podStartSLOduration=1.6887235760000001 podStartE2EDuration="1.688723576s" podCreationTimestamp="2026-01-23 14:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:34:16.688673094 +0000 UTC m=+1803.683501834" watchObservedRunningTime="2026-01-23 14:34:16.688723576 +0000 UTC m=+1803.683552316" Jan 23 14:34:17 crc kubenswrapper[4775]: I0123 14:34:17.390997 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:34:17 crc kubenswrapper[4775]: I0123 14:34:17.404162 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:34:17 crc kubenswrapper[4775]: I0123 14:34:17.695172 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"55b13561-f097-4a68-bc50-482d017d838d","Type":"ContainerStarted","Data":"5c3d0956333cbd83fa5ca67cf1ad79878bf44e3d1a8725af4b0545d4d6530237"} Jan 23 14:34:17 crc kubenswrapper[4775]: I0123 14:34:17.695474 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"55b13561-f097-4a68-bc50-482d017d838d","Type":"ContainerStarted","Data":"c6887ae1f93aec0d07d00ff51cf7a1b2b8059f436065d03b9bea93ac8509208b"} Jan 23 14:34:17 crc kubenswrapper[4775]: I0123 14:34:17.697034 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"9cdb17e1-4872-47c6-a39d-eac9257959bf","Type":"ContainerStarted","Data":"6318732f39157d8f833daeabb446b0f5b265420e6e5607406e6542efdecb189c"} Jan 23 14:34:17 crc kubenswrapper[4775]: I0123 14:34:17.697112 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"9cdb17e1-4872-47c6-a39d-eac9257959bf","Type":"ContainerStarted","Data":"72284d5a64aac0d3ed9862d805136b464b664d4d4255f42e29391ce52dc5c6db"} Jan 23 14:34:17 crc kubenswrapper[4775]: I0123 14:34:17.701381 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" 
event={"ID":"e661567d-01ce-42ba-8257-c8a031e45a0f","Type":"ContainerStarted","Data":"0ad968b4ba3d3de850ad82e61b509fe22db14cd818dbc2c1bed979d9cea5791e"} Jan 23 14:34:17 crc kubenswrapper[4775]: I0123 14:34:17.701423 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"e661567d-01ce-42ba-8257-c8a031e45a0f","Type":"ContainerStarted","Data":"a78d09621585651729bf825f6c0c7c87c9c74cae0df8c0c5b2cea147df0c160b"} Jan 23 14:34:17 crc kubenswrapper[4775]: I0123 14:34:17.719575 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.719560983 podStartE2EDuration="2.719560983s" podCreationTimestamp="2026-01-23 14:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:34:17.717730991 +0000 UTC m=+1804.712559731" watchObservedRunningTime="2026-01-23 14:34:17.719560983 +0000 UTC m=+1804.714389723" Jan 23 14:34:18 crc kubenswrapper[4775]: I0123 14:34:18.716558 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"e661567d-01ce-42ba-8257-c8a031e45a0f","Type":"ContainerStarted","Data":"4d0b4a252f93936495c3a9a585fd700fd1b77459332bc2ab2912bf962d0c63a8"} Jan 23 14:34:18 crc kubenswrapper[4775]: I0123 14:34:18.718753 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"55b13561-f097-4a68-bc50-482d017d838d","Type":"ContainerStarted","Data":"eaec2cfe6653a3e7b4aa21077d223c9ff8028cc0a745039d700d8549a34e22e4"} Jan 23 14:34:18 crc kubenswrapper[4775]: I0123 14:34:18.721928 4775 generic.go:334] "Generic (PLEG): container finished" podID="ec6263e3-855a-48e5-ae77-25462d7e5a13" containerID="ba9aa3a2fb38d7f28f8fd65dca65cb5079144b881eab4e42302934720de2c14c" exitCode=255 Jan 23 14:34:18 crc kubenswrapper[4775]: I0123 14:34:18.721958 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" event={"ID":"ec6263e3-855a-48e5-ae77-25462d7e5a13","Type":"ContainerDied","Data":"ba9aa3a2fb38d7f28f8fd65dca65cb5079144b881eab4e42302934720de2c14c"} Jan 23 14:34:18 crc kubenswrapper[4775]: I0123 14:34:18.722433 4775 scope.go:117] "RemoveContainer" containerID="ba9aa3a2fb38d7f28f8fd65dca65cb5079144b881eab4e42302934720de2c14c" Jan 23 14:34:18 crc kubenswrapper[4775]: I0123 14:34:18.752443 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=3.752415596 podStartE2EDuration="3.752415596s" podCreationTimestamp="2026-01-23 14:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:34:18.735098197 +0000 UTC m=+1805.729926937" watchObservedRunningTime="2026-01-23 14:34:18.752415596 +0000 UTC m=+1805.747244366" Jan 23 14:34:18 crc kubenswrapper[4775]: I0123 14:34:18.769722 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=3.769699154 podStartE2EDuration="3.769699154s" podCreationTimestamp="2026-01-23 14:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:34:18.755945576 +0000 UTC m=+1805.750774366" watchObservedRunningTime="2026-01-23 14:34:18.769699154 +0000 
UTC m=+1805.764527914" Jan 23 14:34:19 crc kubenswrapper[4775]: I0123 14:34:19.033375 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-6qmk5"] Jan 23 14:34:19 crc kubenswrapper[4775]: I0123 14:34:19.043995 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-6qmk5"] Jan 23 14:34:19 crc kubenswrapper[4775]: I0123 14:34:19.723883 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5498924-f821-48fa-88a0-6d8c0c7c01de" path="/var/lib/kubelet/pods/b5498924-f821-48fa-88a0-6d8c0c7c01de/volumes" Jan 23 14:34:19 crc kubenswrapper[4775]: I0123 14:34:19.736640 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" event={"ID":"ec6263e3-855a-48e5-ae77-25462d7e5a13","Type":"ContainerStarted","Data":"3951b61bf0f5fd68e8a231037d3c4c31e8105e9a338b029e1bef1e8babd9023f"} Jan 23 14:34:20 crc kubenswrapper[4775]: I0123 14:34:20.748968 4775 generic.go:334] "Generic (PLEG): container finished" podID="db75fd7c-ba91-4090-ac20-0009c06598f3" containerID="ed23d1d8c2e578153c70d817dfeffe62e4af30e952a97680b7c773eb23fb2ca1" exitCode=0 Jan 23 14:34:20 crc kubenswrapper[4775]: I0123 14:34:20.749385 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" event={"ID":"db75fd7c-ba91-4090-ac20-0009c06598f3","Type":"ContainerDied","Data":"ed23d1d8c2e578153c70d817dfeffe62e4af30e952a97680b7c773eb23fb2ca1"} Jan 23 14:34:21 crc kubenswrapper[4775]: I0123 14:34:21.167718 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:21 crc kubenswrapper[4775]: I0123 14:34:21.245276 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:21 crc kubenswrapper[4775]: I0123 14:34:21.245352 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:21 crc kubenswrapper[4775]: I0123 14:34:21.764138 4775 generic.go:334] "Generic (PLEG): container finished" podID="ec6263e3-855a-48e5-ae77-25462d7e5a13" containerID="3951b61bf0f5fd68e8a231037d3c4c31e8105e9a338b029e1bef1e8babd9023f" exitCode=0 Jan 23 14:34:21 crc kubenswrapper[4775]: I0123 14:34:21.764261 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" event={"ID":"ec6263e3-855a-48e5-ae77-25462d7e5a13","Type":"ContainerDied","Data":"3951b61bf0f5fd68e8a231037d3c4c31e8105e9a338b029e1bef1e8babd9023f"} Jan 23 14:34:21 crc kubenswrapper[4775]: I0123 14:34:21.764345 4775 scope.go:117] "RemoveContainer" containerID="ba9aa3a2fb38d7f28f8fd65dca65cb5079144b881eab4e42302934720de2c14c" Jan 23 14:34:21 crc kubenswrapper[4775]: I0123 14:34:21.774581 4775 generic.go:334] "Generic (PLEG): container finished" podID="71a6469b-2bd1-4004-9a3d-c9d87161efab" containerID="8338a669e0d43937d5f843231e5fbbed5ec502884f9ba96c38e08d3114af925f" exitCode=0 Jan 23 14:34:21 crc kubenswrapper[4775]: I0123 14:34:21.774641 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" event={"ID":"71a6469b-2bd1-4004-9a3d-c9d87161efab","Type":"ContainerDied","Data":"8338a669e0d43937d5f843231e5fbbed5ec502884f9ba96c38e08d3114af925f"} Jan 23 14:34:22 crc kubenswrapper[4775]: I0123 14:34:22.184642 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" Jan 23 14:34:22 crc kubenswrapper[4775]: I0123 14:34:22.336318 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db75fd7c-ba91-4090-ac20-0009c06598f3-config-data\") pod \"db75fd7c-ba91-4090-ac20-0009c06598f3\" (UID: \"db75fd7c-ba91-4090-ac20-0009c06598f3\") " Jan 23 14:34:22 crc kubenswrapper[4775]: I0123 14:34:22.336410 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db75fd7c-ba91-4090-ac20-0009c06598f3-scripts\") pod \"db75fd7c-ba91-4090-ac20-0009c06598f3\" (UID: \"db75fd7c-ba91-4090-ac20-0009c06598f3\") " Jan 23 14:34:22 crc kubenswrapper[4775]: I0123 14:34:22.336439 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdkdk\" (UniqueName: \"kubernetes.io/projected/db75fd7c-ba91-4090-ac20-0009c06598f3-kube-api-access-wdkdk\") pod \"db75fd7c-ba91-4090-ac20-0009c06598f3\" (UID: \"db75fd7c-ba91-4090-ac20-0009c06598f3\") " Jan 23 14:34:22 crc kubenswrapper[4775]: I0123 14:34:22.343400 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db75fd7c-ba91-4090-ac20-0009c06598f3-scripts" (OuterVolumeSpecName: "scripts") pod "db75fd7c-ba91-4090-ac20-0009c06598f3" (UID: "db75fd7c-ba91-4090-ac20-0009c06598f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:34:22 crc kubenswrapper[4775]: I0123 14:34:22.353993 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db75fd7c-ba91-4090-ac20-0009c06598f3-kube-api-access-wdkdk" (OuterVolumeSpecName: "kube-api-access-wdkdk") pod "db75fd7c-ba91-4090-ac20-0009c06598f3" (UID: "db75fd7c-ba91-4090-ac20-0009c06598f3"). InnerVolumeSpecName "kube-api-access-wdkdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:34:22 crc kubenswrapper[4775]: I0123 14:34:22.381769 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db75fd7c-ba91-4090-ac20-0009c06598f3-config-data" (OuterVolumeSpecName: "config-data") pod "db75fd7c-ba91-4090-ac20-0009c06598f3" (UID: "db75fd7c-ba91-4090-ac20-0009c06598f3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:34:22 crc kubenswrapper[4775]: I0123 14:34:22.438117 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db75fd7c-ba91-4090-ac20-0009c06598f3-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:22 crc kubenswrapper[4775]: I0123 14:34:22.438151 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db75fd7c-ba91-4090-ac20-0009c06598f3-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:22 crc kubenswrapper[4775]: I0123 14:34:22.438160 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdkdk\" (UniqueName: \"kubernetes.io/projected/db75fd7c-ba91-4090-ac20-0009c06598f3-kube-api-access-wdkdk\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:22 crc kubenswrapper[4775]: I0123 14:34:22.787161 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" event={"ID":"db75fd7c-ba91-4090-ac20-0009c06598f3","Type":"ContainerDied","Data":"0e103c6efda37b7d35d8959b0e0cb3caa3e02a7d8cc645a289fbdc77aaad85e9"} Jan 23 14:34:22 crc kubenswrapper[4775]: I0123 14:34:22.787205 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" Jan 23 14:34:22 crc kubenswrapper[4775]: I0123 14:34:22.787227 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e103c6efda37b7d35d8959b0e0cb3caa3e02a7d8cc645a289fbdc77aaad85e9" Jan 23 14:34:22 crc kubenswrapper[4775]: I0123 14:34:22.990185 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:34:22 crc kubenswrapper[4775]: I0123 14:34:22.990726 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="55b13561-f097-4a68-bc50-482d017d838d" containerName="nova-kuttl-api-log" containerID="cri-o://5c3d0956333cbd83fa5ca67cf1ad79878bf44e3d1a8725af4b0545d4d6530237" gracePeriod=30 Jan 23 14:34:22 crc kubenswrapper[4775]: I0123 14:34:22.990878 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="55b13561-f097-4a68-bc50-482d017d838d" containerName="nova-kuttl-api-api" containerID="cri-o://eaec2cfe6653a3e7b4aa21077d223c9ff8028cc0a745039d700d8549a34e22e4" gracePeriod=30 Jan 23 14:34:23 crc kubenswrapper[4775]: I0123 14:34:23.020953 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:34:23 crc kubenswrapper[4775]: I0123 14:34:23.021122 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="9cdb17e1-4872-47c6-a39d-eac9257959bf" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://6318732f39157d8f833daeabb446b0f5b265420e6e5607406e6542efdecb189c" gracePeriod=30 Jan 23 14:34:23 crc kubenswrapper[4775]: I0123 14:34:23.059288 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:34:23 crc kubenswrapper[4775]: I0123 14:34:23.059474 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="e661567d-01ce-42ba-8257-c8a031e45a0f" containerName="nova-kuttl-metadata-log" 
containerID="cri-o://0ad968b4ba3d3de850ad82e61b509fe22db14cd818dbc2c1bed979d9cea5791e" gracePeriod=30 Jan 23 14:34:23 crc kubenswrapper[4775]: I0123 14:34:23.059843 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="e661567d-01ce-42ba-8257-c8a031e45a0f" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://4d0b4a252f93936495c3a9a585fd700fd1b77459332bc2ab2912bf962d0c63a8" gracePeriod=30 Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.377831 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.382855 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.558820 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71a6469b-2bd1-4004-9a3d-c9d87161efab-config-data\") pod \"71a6469b-2bd1-4004-9a3d-c9d87161efab\" (UID: \"71a6469b-2bd1-4004-9a3d-c9d87161efab\") " Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.558941 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4r6jr\" (UniqueName: \"kubernetes.io/projected/ec6263e3-855a-48e5-ae77-25462d7e5a13-kube-api-access-4r6jr\") pod \"ec6263e3-855a-48e5-ae77-25462d7e5a13\" (UID: \"ec6263e3-855a-48e5-ae77-25462d7e5a13\") " Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.558999 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71a6469b-2bd1-4004-9a3d-c9d87161efab-scripts\") pod \"71a6469b-2bd1-4004-9a3d-c9d87161efab\" (UID: \"71a6469b-2bd1-4004-9a3d-c9d87161efab\") " Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.559025 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec6263e3-855a-48e5-ae77-25462d7e5a13-scripts\") pod \"ec6263e3-855a-48e5-ae77-25462d7e5a13\" (UID: \"ec6263e3-855a-48e5-ae77-25462d7e5a13\") " Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.559054 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snmpp\" (UniqueName: \"kubernetes.io/projected/71a6469b-2bd1-4004-9a3d-c9d87161efab-kube-api-access-snmpp\") pod \"71a6469b-2bd1-4004-9a3d-c9d87161efab\" (UID: \"71a6469b-2bd1-4004-9a3d-c9d87161efab\") " Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.559175 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec6263e3-855a-48e5-ae77-25462d7e5a13-config-data\") pod \"ec6263e3-855a-48e5-ae77-25462d7e5a13\" (UID: \"ec6263e3-855a-48e5-ae77-25462d7e5a13\") " Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.564293 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71a6469b-2bd1-4004-9a3d-c9d87161efab-kube-api-access-snmpp" (OuterVolumeSpecName: "kube-api-access-snmpp") pod "71a6469b-2bd1-4004-9a3d-c9d87161efab" (UID: "71a6469b-2bd1-4004-9a3d-c9d87161efab"). InnerVolumeSpecName "kube-api-access-snmpp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.564321 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71a6469b-2bd1-4004-9a3d-c9d87161efab-scripts" (OuterVolumeSpecName: "scripts") pod "71a6469b-2bd1-4004-9a3d-c9d87161efab" (UID: "71a6469b-2bd1-4004-9a3d-c9d87161efab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.564978 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec6263e3-855a-48e5-ae77-25462d7e5a13-scripts" (OuterVolumeSpecName: "scripts") pod "ec6263e3-855a-48e5-ae77-25462d7e5a13" (UID: "ec6263e3-855a-48e5-ae77-25462d7e5a13"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.565435 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec6263e3-855a-48e5-ae77-25462d7e5a13-kube-api-access-4r6jr" (OuterVolumeSpecName: "kube-api-access-4r6jr") pod "ec6263e3-855a-48e5-ae77-25462d7e5a13" (UID: "ec6263e3-855a-48e5-ae77-25462d7e5a13"). InnerVolumeSpecName "kube-api-access-4r6jr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.582707 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec6263e3-855a-48e5-ae77-25462d7e5a13-config-data" (OuterVolumeSpecName: "config-data") pod "ec6263e3-855a-48e5-ae77-25462d7e5a13" (UID: "ec6263e3-855a-48e5-ae77-25462d7e5a13"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.586457 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71a6469b-2bd1-4004-9a3d-c9d87161efab-config-data" (OuterVolumeSpecName: "config-data") pod "71a6469b-2bd1-4004-9a3d-c9d87161efab" (UID: "71a6469b-2bd1-4004-9a3d-c9d87161efab"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.660623 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec6263e3-855a-48e5-ae77-25462d7e5a13-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.660664 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71a6469b-2bd1-4004-9a3d-c9d87161efab-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.660678 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4r6jr\" (UniqueName: \"kubernetes.io/projected/ec6263e3-855a-48e5-ae77-25462d7e5a13-kube-api-access-4r6jr\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.660692 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71a6469b-2bd1-4004-9a3d-c9d87161efab-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.660705 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec6263e3-855a-48e5-ae77-25462d7e5a13-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.660717 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snmpp\" (UniqueName: \"kubernetes.io/projected/71a6469b-2bd1-4004-9a3d-c9d87161efab-kube-api-access-snmpp\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.810589 4775 generic.go:334] "Generic (PLEG): container finished" podID="55b13561-f097-4a68-bc50-482d017d838d" containerID="eaec2cfe6653a3e7b4aa21077d223c9ff8028cc0a745039d700d8549a34e22e4" exitCode=0 Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.810634 4775 generic.go:334] "Generic (PLEG): container finished" podID="55b13561-f097-4a68-bc50-482d017d838d" containerID="5c3d0956333cbd83fa5ca67cf1ad79878bf44e3d1a8725af4b0545d4d6530237" exitCode=143 Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.810714 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"55b13561-f097-4a68-bc50-482d017d838d","Type":"ContainerDied","Data":"eaec2cfe6653a3e7b4aa21077d223c9ff8028cc0a745039d700d8549a34e22e4"} Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.810783 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"55b13561-f097-4a68-bc50-482d017d838d","Type":"ContainerDied","Data":"5c3d0956333cbd83fa5ca67cf1ad79878bf44e3d1a8725af4b0545d4d6530237"} Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.812910 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.813097 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" event={"ID":"ec6263e3-855a-48e5-ae77-25462d7e5a13","Type":"ContainerDied","Data":"77615832a6adcb83e24cbd6ebcba1287a3cc2749704e0310ddcb67c4e48edab3"} Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.813145 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77615832a6adcb83e24cbd6ebcba1287a3cc2749704e0310ddcb67c4e48edab3" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.819101 4775 generic.go:334] "Generic (PLEG): container finished" podID="e661567d-01ce-42ba-8257-c8a031e45a0f" containerID="4d0b4a252f93936495c3a9a585fd700fd1b77459332bc2ab2912bf962d0c63a8" exitCode=0 Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.819137 4775 generic.go:334] "Generic (PLEG): container finished" podID="e661567d-01ce-42ba-8257-c8a031e45a0f" containerID="0ad968b4ba3d3de850ad82e61b509fe22db14cd818dbc2c1bed979d9cea5791e" exitCode=143 Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.819236 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"e661567d-01ce-42ba-8257-c8a031e45a0f","Type":"ContainerDied","Data":"4d0b4a252f93936495c3a9a585fd700fd1b77459332bc2ab2912bf962d0c63a8"} Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.819913 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"e661567d-01ce-42ba-8257-c8a031e45a0f","Type":"ContainerDied","Data":"0ad968b4ba3d3de850ad82e61b509fe22db14cd818dbc2c1bed979d9cea5791e"} Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.832068 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" event={"ID":"71a6469b-2bd1-4004-9a3d-c9d87161efab","Type":"ContainerDied","Data":"156abf729125e825e482677cd02117e6955b3cd43618d229ad3b86794d80e8f0"} Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.832105 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="156abf729125e825e482677cd02117e6955b3cd43618d229ad3b86794d80e8f0" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.832320 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.469815 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.476724 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.577327 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e661567d-01ce-42ba-8257-c8a031e45a0f-config-data\") pod \"e661567d-01ce-42ba-8257-c8a031e45a0f\" (UID: \"e661567d-01ce-42ba-8257-c8a031e45a0f\") " Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.577857 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55b13561-f097-4a68-bc50-482d017d838d-config-data\") pod \"55b13561-f097-4a68-bc50-482d017d838d\" (UID: \"55b13561-f097-4a68-bc50-482d017d838d\") " Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.577894 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e661567d-01ce-42ba-8257-c8a031e45a0f-logs\") pod \"e661567d-01ce-42ba-8257-c8a031e45a0f\" (UID: \"e661567d-01ce-42ba-8257-c8a031e45a0f\") " Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.577927 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55b13561-f097-4a68-bc50-482d017d838d-logs\") pod \"55b13561-f097-4a68-bc50-482d017d838d\" (UID: \"55b13561-f097-4a68-bc50-482d017d838d\") " Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.577953 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vfls\" (UniqueName: \"kubernetes.io/projected/e661567d-01ce-42ba-8257-c8a031e45a0f-kube-api-access-9vfls\") pod \"e661567d-01ce-42ba-8257-c8a031e45a0f\" (UID: \"e661567d-01ce-42ba-8257-c8a031e45a0f\") " Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.578010 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqghj\" (UniqueName: \"kubernetes.io/projected/55b13561-f097-4a68-bc50-482d017d838d-kube-api-access-qqghj\") pod \"55b13561-f097-4a68-bc50-482d017d838d\" (UID: \"55b13561-f097-4a68-bc50-482d017d838d\") " Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.578898 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55b13561-f097-4a68-bc50-482d017d838d-logs" (OuterVolumeSpecName: "logs") pod "55b13561-f097-4a68-bc50-482d017d838d" (UID: "55b13561-f097-4a68-bc50-482d017d838d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.579030 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e661567d-01ce-42ba-8257-c8a031e45a0f-logs" (OuterVolumeSpecName: "logs") pod "e661567d-01ce-42ba-8257-c8a031e45a0f" (UID: "e661567d-01ce-42ba-8257-c8a031e45a0f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.579311 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e661567d-01ce-42ba-8257-c8a031e45a0f-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.579340 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55b13561-f097-4a68-bc50-482d017d838d-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.584952 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55b13561-f097-4a68-bc50-482d017d838d-kube-api-access-qqghj" (OuterVolumeSpecName: "kube-api-access-qqghj") pod "55b13561-f097-4a68-bc50-482d017d838d" (UID: "55b13561-f097-4a68-bc50-482d017d838d"). InnerVolumeSpecName "kube-api-access-qqghj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.598644 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e661567d-01ce-42ba-8257-c8a031e45a0f-kube-api-access-9vfls" (OuterVolumeSpecName: "kube-api-access-9vfls") pod "e661567d-01ce-42ba-8257-c8a031e45a0f" (UID: "e661567d-01ce-42ba-8257-c8a031e45a0f"). InnerVolumeSpecName "kube-api-access-9vfls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.604259 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55b13561-f097-4a68-bc50-482d017d838d-config-data" (OuterVolumeSpecName: "config-data") pod "55b13561-f097-4a68-bc50-482d017d838d" (UID: "55b13561-f097-4a68-bc50-482d017d838d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.609009 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e661567d-01ce-42ba-8257-c8a031e45a0f-config-data" (OuterVolumeSpecName: "config-data") pod "e661567d-01ce-42ba-8257-c8a031e45a0f" (UID: "e661567d-01ce-42ba-8257-c8a031e45a0f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.681125 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e661567d-01ce-42ba-8257-c8a031e45a0f-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.681183 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55b13561-f097-4a68-bc50-482d017d838d-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.681195 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vfls\" (UniqueName: \"kubernetes.io/projected/e661567d-01ce-42ba-8257-c8a031e45a0f-kube-api-access-9vfls\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.681213 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqghj\" (UniqueName: \"kubernetes.io/projected/55b13561-f097-4a68-bc50-482d017d838d-kube-api-access-qqghj\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.847063 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"e661567d-01ce-42ba-8257-c8a031e45a0f","Type":"ContainerDied","Data":"a78d09621585651729bf825f6c0c7c87c9c74cae0df8c0c5b2cea147df0c160b"} Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.847108 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.847122 4775 scope.go:117] "RemoveContainer" containerID="4d0b4a252f93936495c3a9a585fd700fd1b77459332bc2ab2912bf962d0c63a8" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.850113 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.850128 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"55b13561-f097-4a68-bc50-482d017d838d","Type":"ContainerDied","Data":"c6887ae1f93aec0d07d00ff51cf7a1b2b8059f436065d03b9bea93ac8509208b"} Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.852875 4775 generic.go:334] "Generic (PLEG): container finished" podID="9cdb17e1-4872-47c6-a39d-eac9257959bf" containerID="6318732f39157d8f833daeabb446b0f5b265420e6e5607406e6542efdecb189c" exitCode=0 Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.852920 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"9cdb17e1-4872-47c6-a39d-eac9257959bf","Type":"ContainerDied","Data":"6318732f39157d8f833daeabb446b0f5b265420e6e5607406e6542efdecb189c"} Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.881502 4775 scope.go:117] "RemoveContainer" containerID="0ad968b4ba3d3de850ad82e61b509fe22db14cd818dbc2c1bed979d9cea5791e" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.887690 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.901747 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.918559 4775 scope.go:117] "RemoveContainer" containerID="eaec2cfe6653a3e7b4aa21077d223c9ff8028cc0a745039d700d8549a34e22e4" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.938864 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:34:24 crc kubenswrapper[4775]: E0123 14:34:24.939321 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55b13561-f097-4a68-bc50-482d017d838d" containerName="nova-kuttl-api-log" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.939379 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="55b13561-f097-4a68-bc50-482d017d838d" containerName="nova-kuttl-api-log" Jan 23 14:34:24 crc kubenswrapper[4775]: E0123 14:34:24.939449 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec6263e3-855a-48e5-ae77-25462d7e5a13" containerName="nova-manage" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.939495 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec6263e3-855a-48e5-ae77-25462d7e5a13" containerName="nova-manage" Jan 23 14:34:24 crc kubenswrapper[4775]: E0123 14:34:24.939558 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db75fd7c-ba91-4090-ac20-0009c06598f3" containerName="nova-manage" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.939615 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="db75fd7c-ba91-4090-ac20-0009c06598f3" containerName="nova-manage" Jan 23 14:34:24 crc kubenswrapper[4775]: E0123 14:34:24.939675 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e661567d-01ce-42ba-8257-c8a031e45a0f" containerName="nova-kuttl-metadata-metadata" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.939720 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e661567d-01ce-42ba-8257-c8a031e45a0f" containerName="nova-kuttl-metadata-metadata" Jan 23 14:34:24 crc kubenswrapper[4775]: E0123 14:34:24.939766 4775 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e661567d-01ce-42ba-8257-c8a031e45a0f" containerName="nova-kuttl-metadata-log" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.939834 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e661567d-01ce-42ba-8257-c8a031e45a0f" containerName="nova-kuttl-metadata-log" Jan 23 14:34:24 crc kubenswrapper[4775]: E0123 14:34:24.939886 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71a6469b-2bd1-4004-9a3d-c9d87161efab" containerName="nova-manage" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.939930 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="71a6469b-2bd1-4004-9a3d-c9d87161efab" containerName="nova-manage" Jan 23 14:34:24 crc kubenswrapper[4775]: E0123 14:34:24.939978 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55b13561-f097-4a68-bc50-482d017d838d" containerName="nova-kuttl-api-api" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.940021 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="55b13561-f097-4a68-bc50-482d017d838d" containerName="nova-kuttl-api-api" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.940196 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec6263e3-855a-48e5-ae77-25462d7e5a13" containerName="nova-manage" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.940258 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="55b13561-f097-4a68-bc50-482d017d838d" containerName="nova-kuttl-api-api" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.940307 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec6263e3-855a-48e5-ae77-25462d7e5a13" containerName="nova-manage" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.940354 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="db75fd7c-ba91-4090-ac20-0009c06598f3" containerName="nova-manage" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.940398 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="e661567d-01ce-42ba-8257-c8a031e45a0f" containerName="nova-kuttl-metadata-metadata" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.940447 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="55b13561-f097-4a68-bc50-482d017d838d" containerName="nova-kuttl-api-log" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.940497 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="e661567d-01ce-42ba-8257-c8a031e45a0f" containerName="nova-kuttl-metadata-log" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.940549 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="71a6469b-2bd1-4004-9a3d-c9d87161efab" containerName="nova-manage" Jan 23 14:34:24 crc kubenswrapper[4775]: E0123 14:34:24.940759 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec6263e3-855a-48e5-ae77-25462d7e5a13" containerName="nova-manage" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.940830 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec6263e3-855a-48e5-ae77-25462d7e5a13" containerName="nova-manage" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.941539 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.945709 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.949054 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.958290 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.968447 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.977176 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.978203 4775 scope.go:117] "RemoveContainer" containerID="5c3d0956333cbd83fa5ca67cf1ad79878bf44e3d1a8725af4b0545d4d6530237" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.978701 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.987328 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.993422 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.064269 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.088773 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhnl9\" (UniqueName: \"kubernetes.io/projected/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-kube-api-access-lhnl9\") pod \"nova-kuttl-metadata-0\" (UID: \"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.089125 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.089154 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3b50d2b-6889-4e09-b328-ce213458f6e3-logs\") pod \"nova-kuttl-api-0\" (UID: \"e3b50d2b-6889-4e09-b328-ce213458f6e3\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.089174 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.089204 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-49w8c\" (UniqueName: \"kubernetes.io/projected/e3b50d2b-6889-4e09-b328-ce213458f6e3-kube-api-access-49w8c\") pod \"nova-kuttl-api-0\" (UID: \"e3b50d2b-6889-4e09-b328-ce213458f6e3\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.089228 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3b50d2b-6889-4e09-b328-ce213458f6e3-config-data\") pod \"nova-kuttl-api-0\" (UID: \"e3b50d2b-6889-4e09-b328-ce213458f6e3\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.190485 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cdb17e1-4872-47c6-a39d-eac9257959bf-config-data\") pod \"9cdb17e1-4872-47c6-a39d-eac9257959bf\" (UID: \"9cdb17e1-4872-47c6-a39d-eac9257959bf\") " Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.190586 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnzdc\" (UniqueName: \"kubernetes.io/projected/9cdb17e1-4872-47c6-a39d-eac9257959bf-kube-api-access-tnzdc\") pod \"9cdb17e1-4872-47c6-a39d-eac9257959bf\" (UID: \"9cdb17e1-4872-47c6-a39d-eac9257959bf\") " Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.190956 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhnl9\" (UniqueName: \"kubernetes.io/projected/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-kube-api-access-lhnl9\") pod \"nova-kuttl-metadata-0\" (UID: \"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.191092 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.191144 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3b50d2b-6889-4e09-b328-ce213458f6e3-logs\") pod \"nova-kuttl-api-0\" (UID: \"e3b50d2b-6889-4e09-b328-ce213458f6e3\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.191187 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.191252 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49w8c\" (UniqueName: \"kubernetes.io/projected/e3b50d2b-6889-4e09-b328-ce213458f6e3-kube-api-access-49w8c\") pod \"nova-kuttl-api-0\" (UID: \"e3b50d2b-6889-4e09-b328-ce213458f6e3\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.191303 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3b50d2b-6889-4e09-b328-ce213458f6e3-config-data\") pod \"nova-kuttl-api-0\" (UID: \"e3b50d2b-6889-4e09-b328-ce213458f6e3\") " 
pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.191987 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3b50d2b-6889-4e09-b328-ce213458f6e3-logs\") pod \"nova-kuttl-api-0\" (UID: \"e3b50d2b-6889-4e09-b328-ce213458f6e3\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.192267 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.196743 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cdb17e1-4872-47c6-a39d-eac9257959bf-kube-api-access-tnzdc" (OuterVolumeSpecName: "kube-api-access-tnzdc") pod "9cdb17e1-4872-47c6-a39d-eac9257959bf" (UID: "9cdb17e1-4872-47c6-a39d-eac9257959bf"). InnerVolumeSpecName "kube-api-access-tnzdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.197466 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.204913 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3b50d2b-6889-4e09-b328-ce213458f6e3-config-data\") pod \"nova-kuttl-api-0\" (UID: \"e3b50d2b-6889-4e09-b328-ce213458f6e3\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.210071 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhnl9\" (UniqueName: \"kubernetes.io/projected/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-kube-api-access-lhnl9\") pod \"nova-kuttl-metadata-0\" (UID: \"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.217184 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49w8c\" (UniqueName: \"kubernetes.io/projected/e3b50d2b-6889-4e09-b328-ce213458f6e3-kube-api-access-49w8c\") pod \"nova-kuttl-api-0\" (UID: \"e3b50d2b-6889-4e09-b328-ce213458f6e3\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.234423 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cdb17e1-4872-47c6-a39d-eac9257959bf-config-data" (OuterVolumeSpecName: "config-data") pod "9cdb17e1-4872-47c6-a39d-eac9257959bf" (UID: "9cdb17e1-4872-47c6-a39d-eac9257959bf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.282458 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.293047 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cdb17e1-4872-47c6-a39d-eac9257959bf-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.293085 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnzdc\" (UniqueName: \"kubernetes.io/projected/9cdb17e1-4872-47c6-a39d-eac9257959bf-kube-api-access-tnzdc\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.307874 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.729698 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55b13561-f097-4a68-bc50-482d017d838d" path="/var/lib/kubelet/pods/55b13561-f097-4a68-bc50-482d017d838d/volumes" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.731497 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e661567d-01ce-42ba-8257-c8a031e45a0f" path="/var/lib/kubelet/pods/e661567d-01ce-42ba-8257-c8a031e45a0f/volumes" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.816511 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.882996 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"9cdb17e1-4872-47c6-a39d-eac9257959bf","Type":"ContainerDied","Data":"72284d5a64aac0d3ed9862d805136b464b664d4d4255f42e29391ce52dc5c6db"} Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.883040 4775 scope.go:117] "RemoveContainer" containerID="6318732f39157d8f833daeabb446b0f5b265420e6e5607406e6542efdecb189c" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.883125 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.893029 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4","Type":"ContainerStarted","Data":"3ef3e20b260f3e98c87c0a0151aead3f8b34244b446f78a1aa8e60eef7375188"} Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.902171 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.932086 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.966370 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.974389 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:34:25 crc kubenswrapper[4775]: E0123 14:34:25.974786 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cdb17e1-4872-47c6-a39d-eac9257959bf" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.974816 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cdb17e1-4872-47c6-a39d-eac9257959bf" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.974985 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cdb17e1-4872-47c6-a39d-eac9257959bf" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.977704 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.979533 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.980242 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 23 14:34:26 crc kubenswrapper[4775]: I0123 14:34:26.117582 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxqm7\" (UniqueName: \"kubernetes.io/projected/37d37972-46f4-48e0-a566-6984e8794cc4-kube-api-access-qxqm7\") pod \"nova-kuttl-scheduler-0\" (UID: \"37d37972-46f4-48e0-a566-6984e8794cc4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:26 crc kubenswrapper[4775]: I0123 14:34:26.117654 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37d37972-46f4-48e0-a566-6984e8794cc4-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"37d37972-46f4-48e0-a566-6984e8794cc4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:26 crc kubenswrapper[4775]: I0123 14:34:26.218973 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxqm7\" (UniqueName: \"kubernetes.io/projected/37d37972-46f4-48e0-a566-6984e8794cc4-kube-api-access-qxqm7\") pod \"nova-kuttl-scheduler-0\" (UID: \"37d37972-46f4-48e0-a566-6984e8794cc4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:26 crc kubenswrapper[4775]: I0123 14:34:26.219404 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37d37972-46f4-48e0-a566-6984e8794cc4-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"37d37972-46f4-48e0-a566-6984e8794cc4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:26 crc kubenswrapper[4775]: I0123 14:34:26.227358 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37d37972-46f4-48e0-a566-6984e8794cc4-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"37d37972-46f4-48e0-a566-6984e8794cc4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:26 crc kubenswrapper[4775]: I0123 14:34:26.239328 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxqm7\" (UniqueName: \"kubernetes.io/projected/37d37972-46f4-48e0-a566-6984e8794cc4-kube-api-access-qxqm7\") pod \"nova-kuttl-scheduler-0\" (UID: \"37d37972-46f4-48e0-a566-6984e8794cc4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:26 crc kubenswrapper[4775]: I0123 14:34:26.293762 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:26 crc kubenswrapper[4775]: I0123 14:34:26.714467 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:34:26 crc kubenswrapper[4775]: I0123 14:34:26.863349 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:34:26 crc kubenswrapper[4775]: W0123 14:34:26.869315 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37d37972_46f4_48e0_a566_6984e8794cc4.slice/crio-eb2b04caf4109fd1b814ec31be59710165c03f6abaf4fa9c36d36f18bb0183bb WatchSource:0}: Error finding container eb2b04caf4109fd1b814ec31be59710165c03f6abaf4fa9c36d36f18bb0183bb: Status 404 returned error can't find the container with id eb2b04caf4109fd1b814ec31be59710165c03f6abaf4fa9c36d36f18bb0183bb Jan 23 14:34:26 crc kubenswrapper[4775]: I0123 14:34:26.913303 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"e3b50d2b-6889-4e09-b328-ce213458f6e3","Type":"ContainerStarted","Data":"cb5995d7a3a4ccb1bfe7a5d8b13bf7003f6838f5a23bd25715820c86d434289d"} Jan 23 14:34:26 crc kubenswrapper[4775]: I0123 14:34:26.914433 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"37d37972-46f4-48e0-a566-6984e8794cc4","Type":"ContainerStarted","Data":"eb2b04caf4109fd1b814ec31be59710165c03f6abaf4fa9c36d36f18bb0183bb"} Jan 23 14:34:27 crc kubenswrapper[4775]: I0123 14:34:27.726147 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cdb17e1-4872-47c6-a39d-eac9257959bf" path="/var/lib/kubelet/pods/9cdb17e1-4872-47c6-a39d-eac9257959bf/volumes" Jan 23 14:34:28 crc kubenswrapper[4775]: I0123 14:34:28.949354 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"37d37972-46f4-48e0-a566-6984e8794cc4","Type":"ContainerStarted","Data":"82bf1887cd673f75ab307d48be18708430db52ba7211d20dd5bc425df5bd2a3d"} Jan 23 14:34:28 crc kubenswrapper[4775]: I0123 14:34:28.956060 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"e3b50d2b-6889-4e09-b328-ce213458f6e3","Type":"ContainerStarted","Data":"9be00b1fe8658ccea45e4a9f713e00e608d129c69ec44babce1f2dbcdbd6fc58"} Jan 23 14:34:28 crc kubenswrapper[4775]: I0123 14:34:28.956107 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"e3b50d2b-6889-4e09-b328-ce213458f6e3","Type":"ContainerStarted","Data":"df6cb3ee7f998b99b2041c07b0bace529437fed1642279c84c0bd4ade8cac2be"} Jan 23 14:34:28 crc kubenswrapper[4775]: I0123 14:34:28.963548 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4","Type":"ContainerStarted","Data":"097b2364f83440a3132b6cb79cdb472334da74927128439c671d4d99b0398fa9"} Jan 23 14:34:28 crc kubenswrapper[4775]: I0123 14:34:28.963584 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4","Type":"ContainerStarted","Data":"75614d1831bbac5592105e5265508722336cc15ee6a181f2f54c134aec1aa13b"} Jan 23 14:34:28 crc kubenswrapper[4775]: I0123 14:34:28.968061 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerStarted","Data":"d3d96378db42c2ddc5100447e504efd5667272c1b57105f220bac9f07cfe29ce"} Jan 23 14:34:28 crc kubenswrapper[4775]: I0123 14:34:28.972891 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=3.972872524 podStartE2EDuration="3.972872524s" podCreationTimestamp="2026-01-23 14:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:34:28.969879304 +0000 UTC m=+1815.964708044" watchObservedRunningTime="2026-01-23 14:34:28.972872524 +0000 UTC m=+1815.967701254" Jan 23 14:34:29 crc kubenswrapper[4775]: I0123 14:34:29.040849 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=5.040828115 podStartE2EDuration="5.040828115s" podCreationTimestamp="2026-01-23 14:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:34:29.015238416 +0000 UTC m=+1816.010067156" watchObservedRunningTime="2026-01-23 14:34:29.040828115 +0000 UTC m=+1816.035656855" Jan 23 14:34:29 crc kubenswrapper[4775]: I0123 14:34:29.056258 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=5.05622997 podStartE2EDuration="5.05622997s" podCreationTimestamp="2026-01-23 14:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:34:29.039369856 +0000 UTC m=+1816.034198596" watchObservedRunningTime="2026-01-23 14:34:29.05622997 +0000 UTC m=+1816.051058750" Jan 23 14:34:30 crc kubenswrapper[4775]: I0123 14:34:30.308884 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:30 crc kubenswrapper[4775]: I0123 14:34:30.309269 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:31 crc kubenswrapper[4775]: I0123 14:34:31.294654 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:35 crc kubenswrapper[4775]: I0123 14:34:35.283637 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:35 crc kubenswrapper[4775]: I0123 14:34:35.284334 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:35 crc kubenswrapper[4775]: I0123 14:34:35.308590 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:35 crc kubenswrapper[4775]: I0123 14:34:35.308671 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:36 crc kubenswrapper[4775]: I0123 14:34:36.294421 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:36 crc kubenswrapper[4775]: I0123 14:34:36.326590 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:36 crc kubenswrapper[4775]: I0123 14:34:36.449016 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.198:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:34:36 crc kubenswrapper[4775]: I0123 14:34:36.449064 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="e3b50d2b-6889-4e09-b328-ce213458f6e3" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:34:36 crc kubenswrapper[4775]: I0123 14:34:36.449009 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="e3b50d2b-6889-4e09-b328-ce213458f6e3" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:34:36 crc kubenswrapper[4775]: I0123 14:34:36.449104 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.198:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:34:37 crc kubenswrapper[4775]: I0123 14:34:37.099379 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:45 crc kubenswrapper[4775]: I0123 14:34:45.287363 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:45 crc kubenswrapper[4775]: I0123 14:34:45.288219 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:45 crc kubenswrapper[4775]: I0123 14:34:45.289840 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:45 crc kubenswrapper[4775]: I0123 14:34:45.291058 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:45 crc kubenswrapper[4775]: I0123 14:34:45.318894 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:45 crc kubenswrapper[4775]: I0123 14:34:45.323254 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:45 crc kubenswrapper[4775]: I0123 14:34:45.327097 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:46 crc kubenswrapper[4775]: I0123 14:34:46.186271 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:46 crc kubenswrapper[4775]: I0123 14:34:46.189792 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:46 crc kubenswrapper[4775]: I0123 14:34:46.192268 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:46 crc kubenswrapper[4775]: I0123 14:34:46.656018 4775 scope.go:117] "RemoveContainer" containerID="b4c1b23769a70549b5013f743139c0324d53830c016cc7b8320ef98ddc16b647" Jan 23 14:34:46 crc kubenswrapper[4775]: I0123 14:34:46.688539 4775 scope.go:117] "RemoveContainer" containerID="29238591798a36dbd48ca4872cdddc49396b7b446c5f60340f5519ed8229bff3" Jan 23 14:34:46 crc kubenswrapper[4775]: I0123 14:34:46.738241 4775 scope.go:117] "RemoveContainer" containerID="4579a5ec0627d03f09f3dda4fc68f8fb4e44af53895a0e8c9b0a26eb695f55d2" Jan 23 14:34:46 crc kubenswrapper[4775]: I0123 14:34:46.772113 4775 scope.go:117] "RemoveContainer" containerID="03bac1f849c95644ae09fd2e62cba3da4e7525c38066ec2837085c381ddd303a" Jan 23 14:35:05 crc kubenswrapper[4775]: I0123 14:35:05.031488 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:35:05 crc kubenswrapper[4775]: I0123 14:35:05.037136 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://8ebbe7df337eed7eec1cd0d49f40ddb05c909061e66825a9f581a0ea754192e7" gracePeriod=30 Jan 23 14:35:05 crc kubenswrapper[4775]: I0123 14:35:05.044902 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 23 14:35:05 crc kubenswrapper[4775]: I0123 14:35:05.045409 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="a3bbc7d7-fc9d-490e-9610-55805e5e876c" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" containerID="cri-o://c947db4e331433b229677ab1076193fcb4125ba5042f6359b54fe32fa2db3874" gracePeriod=30 Jan 23 14:35:05 crc kubenswrapper[4775]: I0123 14:35:05.064443 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:35:05 crc kubenswrapper[4775]: I0123 14:35:05.064686 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="e3b50d2b-6889-4e09-b328-ce213458f6e3" containerName="nova-kuttl-api-log" containerID="cri-o://df6cb3ee7f998b99b2041c07b0bace529437fed1642279c84c0bd4ade8cac2be" gracePeriod=30 Jan 23 14:35:05 crc kubenswrapper[4775]: I0123 14:35:05.065118 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="e3b50d2b-6889-4e09-b328-ce213458f6e3" containerName="nova-kuttl-api-api" containerID="cri-o://9be00b1fe8658ccea45e4a9f713e00e608d129c69ec44babce1f2dbcdbd6fc58" gracePeriod=30 Jan 23 14:35:05 crc kubenswrapper[4775]: I0123 14:35:05.112986 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:35:05 crc kubenswrapper[4775]: I0123 14:35:05.113217 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="37d37972-46f4-48e0-a566-6984e8794cc4" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://82bf1887cd673f75ab307d48be18708430db52ba7211d20dd5bc425df5bd2a3d" gracePeriod=30 Jan 23 14:35:05 crc kubenswrapper[4775]: I0123 14:35:05.302727 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:35:05 crc 
kubenswrapper[4775]: I0123 14:35:05.302938 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="6bcae715-33d1-4c44-9a33-f617c489dd8c" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://00f657f92e0b5f8eeea6508bbfb05372a2ce4865e934064fa0ca5e2ac689ab30" gracePeriod=30 Jan 23 14:35:05 crc kubenswrapper[4775]: I0123 14:35:05.373630 4775 generic.go:334] "Generic (PLEG): container finished" podID="e3b50d2b-6889-4e09-b328-ce213458f6e3" containerID="df6cb3ee7f998b99b2041c07b0bace529437fed1642279c84c0bd4ade8cac2be" exitCode=143 Jan 23 14:35:05 crc kubenswrapper[4775]: I0123 14:35:05.373672 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"e3b50d2b-6889-4e09-b328-ce213458f6e3","Type":"ContainerDied","Data":"df6cb3ee7f998b99b2041c07b0bace529437fed1642279c84c0bd4ade8cac2be"} Jan 23 14:35:06 crc kubenswrapper[4775]: E0123 14:35:06.294766 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 82bf1887cd673f75ab307d48be18708430db52ba7211d20dd5bc425df5bd2a3d is running failed: container process not found" containerID="82bf1887cd673f75ab307d48be18708430db52ba7211d20dd5bc425df5bd2a3d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 14:35:06 crc kubenswrapper[4775]: E0123 14:35:06.295493 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 82bf1887cd673f75ab307d48be18708430db52ba7211d20dd5bc425df5bd2a3d is running failed: container process not found" containerID="82bf1887cd673f75ab307d48be18708430db52ba7211d20dd5bc425df5bd2a3d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 14:35:06 crc kubenswrapper[4775]: E0123 14:35:06.296039 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 82bf1887cd673f75ab307d48be18708430db52ba7211d20dd5bc425df5bd2a3d is running failed: container process not found" containerID="82bf1887cd673f75ab307d48be18708430db52ba7211d20dd5bc425df5bd2a3d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 14:35:06 crc kubenswrapper[4775]: E0123 14:35:06.296125 4775 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 82bf1887cd673f75ab307d48be18708430db52ba7211d20dd5bc425df5bd2a3d is running failed: container process not found" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="37d37972-46f4-48e0-a566-6984e8794cc4" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:35:06 crc kubenswrapper[4775]: I0123 14:35:06.385843 4775 generic.go:334] "Generic (PLEG): container finished" podID="37d37972-46f4-48e0-a566-6984e8794cc4" containerID="82bf1887cd673f75ab307d48be18708430db52ba7211d20dd5bc425df5bd2a3d" exitCode=0 Jan 23 14:35:06 crc kubenswrapper[4775]: I0123 14:35:06.385913 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"37d37972-46f4-48e0-a566-6984e8794cc4","Type":"ContainerDied","Data":"82bf1887cd673f75ab307d48be18708430db52ba7211d20dd5bc425df5bd2a3d"} Jan 23 14:35:06 crc kubenswrapper[4775]: I0123 14:35:06.577725 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:35:06 crc kubenswrapper[4775]: I0123 14:35:06.665144 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxqm7\" (UniqueName: \"kubernetes.io/projected/37d37972-46f4-48e0-a566-6984e8794cc4-kube-api-access-qxqm7\") pod \"37d37972-46f4-48e0-a566-6984e8794cc4\" (UID: \"37d37972-46f4-48e0-a566-6984e8794cc4\") " Jan 23 14:35:06 crc kubenswrapper[4775]: I0123 14:35:06.665280 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37d37972-46f4-48e0-a566-6984e8794cc4-config-data\") pod \"37d37972-46f4-48e0-a566-6984e8794cc4\" (UID: \"37d37972-46f4-48e0-a566-6984e8794cc4\") " Jan 23 14:35:06 crc kubenswrapper[4775]: I0123 14:35:06.676280 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37d37972-46f4-48e0-a566-6984e8794cc4-kube-api-access-qxqm7" (OuterVolumeSpecName: "kube-api-access-qxqm7") pod "37d37972-46f4-48e0-a566-6984e8794cc4" (UID: "37d37972-46f4-48e0-a566-6984e8794cc4"). InnerVolumeSpecName "kube-api-access-qxqm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:06 crc kubenswrapper[4775]: I0123 14:35:06.705018 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37d37972-46f4-48e0-a566-6984e8794cc4-config-data" (OuterVolumeSpecName: "config-data") pod "37d37972-46f4-48e0-a566-6984e8794cc4" (UID: "37d37972-46f4-48e0-a566-6984e8794cc4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:35:06 crc kubenswrapper[4775]: I0123 14:35:06.767167 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxqm7\" (UniqueName: \"kubernetes.io/projected/37d37972-46f4-48e0-a566-6984e8794cc4-kube-api-access-qxqm7\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:06 crc kubenswrapper[4775]: I0123 14:35:06.767208 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37d37972-46f4-48e0-a566-6984e8794cc4-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.400759 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"37d37972-46f4-48e0-a566-6984e8794cc4","Type":"ContainerDied","Data":"eb2b04caf4109fd1b814ec31be59710165c03f6abaf4fa9c36d36f18bb0183bb"} Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.400855 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.400872 4775 scope.go:117] "RemoveContainer" containerID="82bf1887cd673f75ab307d48be18708430db52ba7211d20dd5bc425df5bd2a3d" Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.453958 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.463483 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.485849 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:35:07 crc kubenswrapper[4775]: E0123 14:35:07.486201 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37d37972-46f4-48e0-a566-6984e8794cc4" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.486222 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="37d37972-46f4-48e0-a566-6984e8794cc4" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.486421 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="37d37972-46f4-48e0-a566-6984e8794cc4" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.487046 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.488878 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.496771 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.579839 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/053e93b4-4f28-478d-9065-20980afe9e20-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"053e93b4-4f28-478d-9065-20980afe9e20\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.580192 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvpt6\" (UniqueName: \"kubernetes.io/projected/053e93b4-4f28-478d-9065-20980afe9e20-kube-api-access-vvpt6\") pod \"nova-kuttl-scheduler-0\" (UID: \"053e93b4-4f28-478d-9065-20980afe9e20\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.681836 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvpt6\" (UniqueName: \"kubernetes.io/projected/053e93b4-4f28-478d-9065-20980afe9e20-kube-api-access-vvpt6\") pod \"nova-kuttl-scheduler-0\" (UID: \"053e93b4-4f28-478d-9065-20980afe9e20\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.681966 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/053e93b4-4f28-478d-9065-20980afe9e20-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"053e93b4-4f28-478d-9065-20980afe9e20\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:35:07 crc 
kubenswrapper[4775]: I0123 14:35:07.698970 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/053e93b4-4f28-478d-9065-20980afe9e20-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"053e93b4-4f28-478d-9065-20980afe9e20\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.721016 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvpt6\" (UniqueName: \"kubernetes.io/projected/053e93b4-4f28-478d-9065-20980afe9e20-kube-api-access-vvpt6\") pod \"nova-kuttl-scheduler-0\" (UID: \"053e93b4-4f28-478d-9065-20980afe9e20\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.738009 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37d37972-46f4-48e0-a566-6984e8794cc4" path="/var/lib/kubelet/pods/37d37972-46f4-48e0-a566-6984e8794cc4/volumes" Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.841770 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.230444 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.292407 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hj9j\" (UniqueName: \"kubernetes.io/projected/6bcae715-33d1-4c44-9a33-f617c489dd8c-kube-api-access-7hj9j\") pod \"6bcae715-33d1-4c44-9a33-f617c489dd8c\" (UID: \"6bcae715-33d1-4c44-9a33-f617c489dd8c\") " Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.292551 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bcae715-33d1-4c44-9a33-f617c489dd8c-config-data\") pod \"6bcae715-33d1-4c44-9a33-f617c489dd8c\" (UID: \"6bcae715-33d1-4c44-9a33-f617c489dd8c\") " Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.297988 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bcae715-33d1-4c44-9a33-f617c489dd8c-kube-api-access-7hj9j" (OuterVolumeSpecName: "kube-api-access-7hj9j") pod "6bcae715-33d1-4c44-9a33-f617c489dd8c" (UID: "6bcae715-33d1-4c44-9a33-f617c489dd8c"). InnerVolumeSpecName "kube-api-access-7hj9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.330361 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bcae715-33d1-4c44-9a33-f617c489dd8c-config-data" (OuterVolumeSpecName: "config-data") pod "6bcae715-33d1-4c44-9a33-f617c489dd8c" (UID: "6bcae715-33d1-4c44-9a33-f617c489dd8c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.361280 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.394681 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hj9j\" (UniqueName: \"kubernetes.io/projected/6bcae715-33d1-4c44-9a33-f617c489dd8c-kube-api-access-7hj9j\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.394730 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bcae715-33d1-4c44-9a33-f617c489dd8c-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.412908 4775 generic.go:334] "Generic (PLEG): container finished" podID="e3b50d2b-6889-4e09-b328-ce213458f6e3" containerID="9be00b1fe8658ccea45e4a9f713e00e608d129c69ec44babce1f2dbcdbd6fc58" exitCode=0 Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.412988 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"e3b50d2b-6889-4e09-b328-ce213458f6e3","Type":"ContainerDied","Data":"9be00b1fe8658ccea45e4a9f713e00e608d129c69ec44babce1f2dbcdbd6fc58"} Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.415837 4775 generic.go:334] "Generic (PLEG): container finished" podID="7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa" containerID="8ebbe7df337eed7eec1cd0d49f40ddb05c909061e66825a9f581a0ea754192e7" exitCode=0 Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.415929 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa","Type":"ContainerDied","Data":"8ebbe7df337eed7eec1cd0d49f40ddb05c909061e66825a9f581a0ea754192e7"} Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.419360 4775 generic.go:334] "Generic (PLEG): container finished" podID="6bcae715-33d1-4c44-9a33-f617c489dd8c" containerID="00f657f92e0b5f8eeea6508bbfb05372a2ce4865e934064fa0ca5e2ac689ab30" exitCode=0 Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.419468 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"6bcae715-33d1-4c44-9a33-f617c489dd8c","Type":"ContainerDied","Data":"00f657f92e0b5f8eeea6508bbfb05372a2ce4865e934064fa0ca5e2ac689ab30"} Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.419549 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"6bcae715-33d1-4c44-9a33-f617c489dd8c","Type":"ContainerDied","Data":"993d5972eb5c6f4c100b944f0126ed4f2e54f4d9412dabbd89c853572013d71a"} Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.419626 4775 scope.go:117] "RemoveContainer" containerID="00f657f92e0b5f8eeea6508bbfb05372a2ce4865e934064fa0ca5e2ac689ab30" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.419829 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.445508 4775 scope.go:117] "RemoveContainer" containerID="00f657f92e0b5f8eeea6508bbfb05372a2ce4865e934064fa0ca5e2ac689ab30" Jan 23 14:35:08 crc kubenswrapper[4775]: E0123 14:35:08.457448 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00f657f92e0b5f8eeea6508bbfb05372a2ce4865e934064fa0ca5e2ac689ab30\": container with ID starting with 00f657f92e0b5f8eeea6508bbfb05372a2ce4865e934064fa0ca5e2ac689ab30 not found: ID does not exist" containerID="00f657f92e0b5f8eeea6508bbfb05372a2ce4865e934064fa0ca5e2ac689ab30" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.457528 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00f657f92e0b5f8eeea6508bbfb05372a2ce4865e934064fa0ca5e2ac689ab30"} err="failed to get container status \"00f657f92e0b5f8eeea6508bbfb05372a2ce4865e934064fa0ca5e2ac689ab30\": rpc error: code = NotFound desc = could not find container \"00f657f92e0b5f8eeea6508bbfb05372a2ce4865e934064fa0ca5e2ac689ab30\": container with ID starting with 00f657f92e0b5f8eeea6508bbfb05372a2ce4865e934064fa0ca5e2ac689ab30 not found: ID does not exist" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.501891 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.518123 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.524281 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:35:08 crc kubenswrapper[4775]: E0123 14:35:08.524629 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bcae715-33d1-4c44-9a33-f617c489dd8c" containerName="nova-kuttl-cell1-conductor-conductor" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.524640 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bcae715-33d1-4c44-9a33-f617c489dd8c" containerName="nova-kuttl-cell1-conductor-conductor" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.524888 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bcae715-33d1-4c44-9a33-f617c489dd8c" containerName="nova-kuttl-cell1-conductor-conductor" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.525425 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.527235 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.529300 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.597344 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc76b90-669a-4df4-a976-1199443a8f55-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"8dc76b90-669a-4df4-a976-1199443a8f55\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.597506 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkqlt\" (UniqueName: \"kubernetes.io/projected/8dc76b90-669a-4df4-a976-1199443a8f55-kube-api-access-lkqlt\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"8dc76b90-669a-4df4-a976-1199443a8f55\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.623002 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.698415 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqtbj\" (UniqueName: \"kubernetes.io/projected/7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa-kube-api-access-hqtbj\") pod \"7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa\" (UID: \"7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa\") " Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.698620 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa-config-data\") pod \"7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa\" (UID: \"7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa\") " Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.699235 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc76b90-669a-4df4-a976-1199443a8f55-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"8dc76b90-669a-4df4-a976-1199443a8f55\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.699452 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkqlt\" (UniqueName: \"kubernetes.io/projected/8dc76b90-669a-4df4-a976-1199443a8f55-kube-api-access-lkqlt\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"8dc76b90-669a-4df4-a976-1199443a8f55\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.703363 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa-kube-api-access-hqtbj" (OuterVolumeSpecName: "kube-api-access-hqtbj") pod "7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa" (UID: "7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa"). InnerVolumeSpecName "kube-api-access-hqtbj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.704681 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc76b90-669a-4df4-a976-1199443a8f55-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"8dc76b90-669a-4df4-a976-1199443a8f55\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.724557 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkqlt\" (UniqueName: \"kubernetes.io/projected/8dc76b90-669a-4df4-a976-1199443a8f55-kube-api-access-lkqlt\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"8dc76b90-669a-4df4-a976-1199443a8f55\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.726974 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa-config-data" (OuterVolumeSpecName: "config-data") pod "7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa" (UID: "7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.800036 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqtbj\" (UniqueName: \"kubernetes.io/projected/7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa-kube-api-access-hqtbj\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.800150 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.816427 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.865371 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.900821 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49w8c\" (UniqueName: \"kubernetes.io/projected/e3b50d2b-6889-4e09-b328-ce213458f6e3-kube-api-access-49w8c\") pod \"e3b50d2b-6889-4e09-b328-ce213458f6e3\" (UID: \"e3b50d2b-6889-4e09-b328-ce213458f6e3\") " Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.900948 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3b50d2b-6889-4e09-b328-ce213458f6e3-config-data\") pod \"e3b50d2b-6889-4e09-b328-ce213458f6e3\" (UID: \"e3b50d2b-6889-4e09-b328-ce213458f6e3\") " Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.901011 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3b50d2b-6889-4e09-b328-ce213458f6e3-logs\") pod \"e3b50d2b-6889-4e09-b328-ce213458f6e3\" (UID: \"e3b50d2b-6889-4e09-b328-ce213458f6e3\") " Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.901860 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3b50d2b-6889-4e09-b328-ce213458f6e3-logs" (OuterVolumeSpecName: "logs") pod "e3b50d2b-6889-4e09-b328-ce213458f6e3" (UID: "e3b50d2b-6889-4e09-b328-ce213458f6e3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.905953 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3b50d2b-6889-4e09-b328-ce213458f6e3-kube-api-access-49w8c" (OuterVolumeSpecName: "kube-api-access-49w8c") pod "e3b50d2b-6889-4e09-b328-ce213458f6e3" (UID: "e3b50d2b-6889-4e09-b328-ce213458f6e3"). InnerVolumeSpecName "kube-api-access-49w8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.932347 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b50d2b-6889-4e09-b328-ce213458f6e3-config-data" (OuterVolumeSpecName: "config-data") pod "e3b50d2b-6889-4e09-b328-ce213458f6e3" (UID: "e3b50d2b-6889-4e09-b328-ce213458f6e3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.002575 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49w8c\" (UniqueName: \"kubernetes.io/projected/e3b50d2b-6889-4e09-b328-ce213458f6e3-kube-api-access-49w8c\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.002622 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3b50d2b-6889-4e09-b328-ce213458f6e3-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.002636 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3b50d2b-6889-4e09-b328-ce213458f6e3-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:09 crc kubenswrapper[4775]: W0123 14:35:09.316707 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8dc76b90_669a_4df4_a976_1199443a8f55.slice/crio-84970fe316cbca495dccd6939de0eed1e17d5dc5945a7756f3a045d8dd58f52a WatchSource:0}: Error finding container 84970fe316cbca495dccd6939de0eed1e17d5dc5945a7756f3a045d8dd58f52a: Status 404 returned error can't find the container with id 84970fe316cbca495dccd6939de0eed1e17d5dc5945a7756f3a045d8dd58f52a Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.316986 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.436475 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"8dc76b90-669a-4df4-a976-1199443a8f55","Type":"ContainerStarted","Data":"84970fe316cbca495dccd6939de0eed1e17d5dc5945a7756f3a045d8dd58f52a"} Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.441571 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"e3b50d2b-6889-4e09-b328-ce213458f6e3","Type":"ContainerDied","Data":"cb5995d7a3a4ccb1bfe7a5d8b13bf7003f6838f5a23bd25715820c86d434289d"} Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.441633 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.441654 4775 scope.go:117] "RemoveContainer" containerID="9be00b1fe8658ccea45e4a9f713e00e608d129c69ec44babce1f2dbcdbd6fc58" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.458478 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa","Type":"ContainerDied","Data":"862d714ec5d72fa2cecc76c787b92a298898df37e1f6457c744d6aed52ae7549"} Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.458601 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.462738 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"053e93b4-4f28-478d-9065-20980afe9e20","Type":"ContainerStarted","Data":"ab22bfbf9613e4952570f4b58b9dfa2a5876ac2a81bea5c917b73f18bda88cfe"} Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.462780 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"053e93b4-4f28-478d-9065-20980afe9e20","Type":"ContainerStarted","Data":"c6208b8557503ef028aa8573339ec1a013f7ed363a4379dec4e0efaa541f0f37"} Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.496410 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.496392059 podStartE2EDuration="2.496392059s" podCreationTimestamp="2026-01-23 14:35:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:35:09.490324206 +0000 UTC m=+1856.485152966" watchObservedRunningTime="2026-01-23 14:35:09.496392059 +0000 UTC m=+1856.491220799" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.512043 4775 scope.go:117] "RemoveContainer" containerID="df6cb3ee7f998b99b2041c07b0bace529437fed1642279c84c0bd4ade8cac2be" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.551480 4775 scope.go:117] "RemoveContainer" containerID="8ebbe7df337eed7eec1cd0d49f40ddb05c909061e66825a9f581a0ea754192e7" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.571050 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.577902 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.600144 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.607776 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:35:09 crc kubenswrapper[4775]: E0123 14:35:09.608161 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.608180 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:35:09 crc kubenswrapper[4775]: E0123 14:35:09.608192 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b50d2b-6889-4e09-b328-ce213458f6e3" containerName="nova-kuttl-api-log" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.608199 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b50d2b-6889-4e09-b328-ce213458f6e3" containerName="nova-kuttl-api-log" Jan 23 14:35:09 crc kubenswrapper[4775]: E0123 14:35:09.608206 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b50d2b-6889-4e09-b328-ce213458f6e3" containerName="nova-kuttl-api-api" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.608213 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b50d2b-6889-4e09-b328-ce213458f6e3" containerName="nova-kuttl-api-api" Jan 23 
14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.608370 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3b50d2b-6889-4e09-b328-ce213458f6e3" containerName="nova-kuttl-api-api" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.608380 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.608389 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3b50d2b-6889-4e09-b328-ce213458f6e3" containerName="nova-kuttl-api-log" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.609226 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.612873 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.632633 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.638437 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.645753 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.646739 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.649210 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.649434 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.716734 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhrnh\" (UniqueName: \"kubernetes.io/projected/f1b9dee7-4afa-4bdc-88fc-f610d0bca84d-kube-api-access-mhrnh\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"f1b9dee7-4afa-4bdc-88fc-f610d0bca84d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.717183 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93ee5e49-16f0-402a-9d8e-6f237110e663-logs\") pod \"nova-kuttl-api-0\" (UID: \"93ee5e49-16f0-402a-9d8e-6f237110e663\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.717226 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1b9dee7-4afa-4bdc-88fc-f610d0bca84d-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"f1b9dee7-4afa-4bdc-88fc-f610d0bca84d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.717265 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93ee5e49-16f0-402a-9d8e-6f237110e663-config-data\") pod 
\"nova-kuttl-api-0\" (UID: \"93ee5e49-16f0-402a-9d8e-6f237110e663\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.717307 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js8lh\" (UniqueName: \"kubernetes.io/projected/93ee5e49-16f0-402a-9d8e-6f237110e663-kube-api-access-js8lh\") pod \"nova-kuttl-api-0\" (UID: \"93ee5e49-16f0-402a-9d8e-6f237110e663\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.725392 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bcae715-33d1-4c44-9a33-f617c489dd8c" path="/var/lib/kubelet/pods/6bcae715-33d1-4c44-9a33-f617c489dd8c/volumes" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.725961 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa" path="/var/lib/kubelet/pods/7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa/volumes" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.726431 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3b50d2b-6889-4e09-b328-ce213458f6e3" path="/var/lib/kubelet/pods/e3b50d2b-6889-4e09-b328-ce213458f6e3/volumes" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.818191 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93ee5e49-16f0-402a-9d8e-6f237110e663-logs\") pod \"nova-kuttl-api-0\" (UID: \"93ee5e49-16f0-402a-9d8e-6f237110e663\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.818247 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1b9dee7-4afa-4bdc-88fc-f610d0bca84d-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"f1b9dee7-4afa-4bdc-88fc-f610d0bca84d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.818286 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93ee5e49-16f0-402a-9d8e-6f237110e663-config-data\") pod \"nova-kuttl-api-0\" (UID: \"93ee5e49-16f0-402a-9d8e-6f237110e663\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.818327 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js8lh\" (UniqueName: \"kubernetes.io/projected/93ee5e49-16f0-402a-9d8e-6f237110e663-kube-api-access-js8lh\") pod \"nova-kuttl-api-0\" (UID: \"93ee5e49-16f0-402a-9d8e-6f237110e663\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.818396 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhrnh\" (UniqueName: \"kubernetes.io/projected/f1b9dee7-4afa-4bdc-88fc-f610d0bca84d-kube-api-access-mhrnh\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"f1b9dee7-4afa-4bdc-88fc-f610d0bca84d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.819494 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93ee5e49-16f0-402a-9d8e-6f237110e663-logs\") pod \"nova-kuttl-api-0\" (UID: \"93ee5e49-16f0-402a-9d8e-6f237110e663\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:09 crc 
kubenswrapper[4775]: I0123 14:35:09.833635 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1b9dee7-4afa-4bdc-88fc-f610d0bca84d-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"f1b9dee7-4afa-4bdc-88fc-f610d0bca84d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.833717 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93ee5e49-16f0-402a-9d8e-6f237110e663-config-data\") pod \"nova-kuttl-api-0\" (UID: \"93ee5e49-16f0-402a-9d8e-6f237110e663\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.838370 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhrnh\" (UniqueName: \"kubernetes.io/projected/f1b9dee7-4afa-4bdc-88fc-f610d0bca84d-kube-api-access-mhrnh\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"f1b9dee7-4afa-4bdc-88fc-f610d0bca84d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.839788 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js8lh\" (UniqueName: \"kubernetes.io/projected/93ee5e49-16f0-402a-9d8e-6f237110e663-kube-api-access-js8lh\") pod \"nova-kuttl-api-0\" (UID: \"93ee5e49-16f0-402a-9d8e-6f237110e663\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.904271 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.920059 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3bbc7d7-fc9d-490e-9610-55805e5e876c-config-data\") pod \"a3bbc7d7-fc9d-490e-9610-55805e5e876c\" (UID: \"a3bbc7d7-fc9d-490e-9610-55805e5e876c\") " Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.920126 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzqmq\" (UniqueName: \"kubernetes.io/projected/a3bbc7d7-fc9d-490e-9610-55805e5e876c-kube-api-access-vzqmq\") pod \"a3bbc7d7-fc9d-490e-9610-55805e5e876c\" (UID: \"a3bbc7d7-fc9d-490e-9610-55805e5e876c\") " Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.926107 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3bbc7d7-fc9d-490e-9610-55805e5e876c-kube-api-access-vzqmq" (OuterVolumeSpecName: "kube-api-access-vzqmq") pod "a3bbc7d7-fc9d-490e-9610-55805e5e876c" (UID: "a3bbc7d7-fc9d-490e-9610-55805e5e876c"). InnerVolumeSpecName "kube-api-access-vzqmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.932631 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.947940 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3bbc7d7-fc9d-490e-9610-55805e5e876c-config-data" (OuterVolumeSpecName: "config-data") pod "a3bbc7d7-fc9d-490e-9610-55805e5e876c" (UID: "a3bbc7d7-fc9d-490e-9610-55805e5e876c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.965896 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.022343 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3bbc7d7-fc9d-490e-9610-55805e5e876c-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.022374 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzqmq\" (UniqueName: \"kubernetes.io/projected/a3bbc7d7-fc9d-490e-9610-55805e5e876c-kube-api-access-vzqmq\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.459881 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:35:10 crc kubenswrapper[4775]: W0123 14:35:10.461036 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93ee5e49_16f0_402a_9d8e_6f237110e663.slice/crio-fadc935d0ca1313694e64e348196ad9cf5ba16ec1ffcb2fcdd1d5a9b83025e52 WatchSource:0}: Error finding container fadc935d0ca1313694e64e348196ad9cf5ba16ec1ffcb2fcdd1d5a9b83025e52: Status 404 returned error can't find the container with id fadc935d0ca1313694e64e348196ad9cf5ba16ec1ffcb2fcdd1d5a9b83025e52 Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.474225 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"8dc76b90-669a-4df4-a976-1199443a8f55","Type":"ContainerStarted","Data":"d399db7d10a3f96ad48455263cc1a2f5c347077b872b70479e4c6c0cf205a7d5"} Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.474594 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.475947 4775 generic.go:334] "Generic (PLEG): container finished" podID="a3bbc7d7-fc9d-490e-9610-55805e5e876c" containerID="c947db4e331433b229677ab1076193fcb4125ba5042f6359b54fe32fa2db3874" exitCode=0 Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.475992 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.476024 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"a3bbc7d7-fc9d-490e-9610-55805e5e876c","Type":"ContainerDied","Data":"c947db4e331433b229677ab1076193fcb4125ba5042f6359b54fe32fa2db3874"} Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.476105 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"a3bbc7d7-fc9d-490e-9610-55805e5e876c","Type":"ContainerDied","Data":"3b893ae1dbc88ba1326e6a0a0bd54925381cdc400ec55f87f58040e0b56c3ac3"} Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.476139 4775 scope.go:117] "RemoveContainer" containerID="c947db4e331433b229677ab1076193fcb4125ba5042f6359b54fe32fa2db3874" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.483380 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"93ee5e49-16f0-402a-9d8e-6f237110e663","Type":"ContainerStarted","Data":"fadc935d0ca1313694e64e348196ad9cf5ba16ec1ffcb2fcdd1d5a9b83025e52"} Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.499024 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.503730 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podStartSLOduration=2.503713459 podStartE2EDuration="2.503713459s" podCreationTimestamp="2026-01-23 14:35:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:35:10.496180486 +0000 UTC m=+1857.491009226" watchObservedRunningTime="2026-01-23 14:35:10.503713459 +0000 UTC m=+1857.498542199" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.533255 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.539042 4775 scope.go:117] "RemoveContainer" containerID="c947db4e331433b229677ab1076193fcb4125ba5042f6359b54fe32fa2db3874" Jan 23 14:35:10 crc kubenswrapper[4775]: E0123 14:35:10.539600 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c947db4e331433b229677ab1076193fcb4125ba5042f6359b54fe32fa2db3874\": container with ID starting with c947db4e331433b229677ab1076193fcb4125ba5042f6359b54fe32fa2db3874 not found: ID does not exist" containerID="c947db4e331433b229677ab1076193fcb4125ba5042f6359b54fe32fa2db3874" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.539627 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c947db4e331433b229677ab1076193fcb4125ba5042f6359b54fe32fa2db3874"} err="failed to get container status \"c947db4e331433b229677ab1076193fcb4125ba5042f6359b54fe32fa2db3874\": rpc error: code = NotFound desc = could not find container \"c947db4e331433b229677ab1076193fcb4125ba5042f6359b54fe32fa2db3874\": container with ID starting with c947db4e331433b229677ab1076193fcb4125ba5042f6359b54fe32fa2db3874 not found: ID does not exist" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.545766 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.552245 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 23 14:35:10 crc kubenswrapper[4775]: E0123 14:35:10.552618 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3bbc7d7-fc9d-490e-9610-55805e5e876c" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.552630 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3bbc7d7-fc9d-490e-9610-55805e5e876c" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.552785 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3bbc7d7-fc9d-490e-9610-55805e5e876c" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.553314 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.560020 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.563430 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-compute-fake1-compute-config-data" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.630360 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgvjr\" (UniqueName: \"kubernetes.io/projected/bde4903d-4224-4139-a444-3c5baf78ff7b-kube-api-access-qgvjr\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"bde4903d-4224-4139-a444-3c5baf78ff7b\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.630648 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"bde4903d-4224-4139-a444-3c5baf78ff7b\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.731774 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"bde4903d-4224-4139-a444-3c5baf78ff7b\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.731941 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgvjr\" (UniqueName: \"kubernetes.io/projected/bde4903d-4224-4139-a444-3c5baf78ff7b-kube-api-access-qgvjr\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"bde4903d-4224-4139-a444-3c5baf78ff7b\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.735669 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data\") pod 
\"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"bde4903d-4224-4139-a444-3c5baf78ff7b\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.754348 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nd8ng"] Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.756021 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.762335 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgvjr\" (UniqueName: \"kubernetes.io/projected/bde4903d-4224-4139-a444-3c5baf78ff7b-kube-api-access-qgvjr\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"bde4903d-4224-4139-a444-3c5baf78ff7b\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.765962 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nd8ng"] Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.833883 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4561aa6c-c92c-4005-8587-a8367a331257-catalog-content\") pod \"certified-operators-nd8ng\" (UID: \"4561aa6c-c92c-4005-8587-a8367a331257\") " pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.833971 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4561aa6c-c92c-4005-8587-a8367a331257-utilities\") pod \"certified-operators-nd8ng\" (UID: \"4561aa6c-c92c-4005-8587-a8367a331257\") " pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.834019 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwrjm\" (UniqueName: \"kubernetes.io/projected/4561aa6c-c92c-4005-8587-a8367a331257-kube-api-access-gwrjm\") pod \"certified-operators-nd8ng\" (UID: \"4561aa6c-c92c-4005-8587-a8367a331257\") " pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.884170 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.936465 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4561aa6c-c92c-4005-8587-a8367a331257-utilities\") pod \"certified-operators-nd8ng\" (UID: \"4561aa6c-c92c-4005-8587-a8367a331257\") " pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.936533 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwrjm\" (UniqueName: \"kubernetes.io/projected/4561aa6c-c92c-4005-8587-a8367a331257-kube-api-access-gwrjm\") pod \"certified-operators-nd8ng\" (UID: \"4561aa6c-c92c-4005-8587-a8367a331257\") " pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.936593 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4561aa6c-c92c-4005-8587-a8367a331257-catalog-content\") pod \"certified-operators-nd8ng\" (UID: \"4561aa6c-c92c-4005-8587-a8367a331257\") " pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.937417 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4561aa6c-c92c-4005-8587-a8367a331257-catalog-content\") pod \"certified-operators-nd8ng\" (UID: \"4561aa6c-c92c-4005-8587-a8367a331257\") " pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.937772 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4561aa6c-c92c-4005-8587-a8367a331257-utilities\") pod \"certified-operators-nd8ng\" (UID: \"4561aa6c-c92c-4005-8587-a8367a331257\") " pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.954508 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwrjm\" (UniqueName: \"kubernetes.io/projected/4561aa6c-c92c-4005-8587-a8367a331257-kube-api-access-gwrjm\") pod \"certified-operators-nd8ng\" (UID: \"4561aa6c-c92c-4005-8587-a8367a331257\") " pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:11 crc kubenswrapper[4775]: I0123 14:35:11.174764 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:11 crc kubenswrapper[4775]: I0123 14:35:11.328055 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 23 14:35:11 crc kubenswrapper[4775]: W0123 14:35:11.334494 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbde4903d_4224_4139_a444_3c5baf78ff7b.slice/crio-6eb0a59b18194a13bbf978de13cdca6d55273f8b0946c59e7a3ffc58619e5617 WatchSource:0}: Error finding container 6eb0a59b18194a13bbf978de13cdca6d55273f8b0946c59e7a3ffc58619e5617: Status 404 returned error can't find the container with id 6eb0a59b18194a13bbf978de13cdca6d55273f8b0946c59e7a3ffc58619e5617 Jan 23 14:35:11 crc kubenswrapper[4775]: I0123 14:35:11.489919 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nd8ng"] Jan 23 14:35:11 crc kubenswrapper[4775]: W0123 14:35:11.491750 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4561aa6c_c92c_4005_8587_a8367a331257.slice/crio-e139d0816faaf7bfd497ac42998f4b734f9ed93f619125ddbc81d602777ae54c WatchSource:0}: Error finding container e139d0816faaf7bfd497ac42998f4b734f9ed93f619125ddbc81d602777ae54c: Status 404 returned error can't find the container with id e139d0816faaf7bfd497ac42998f4b734f9ed93f619125ddbc81d602777ae54c Jan 23 14:35:11 crc kubenswrapper[4775]: I0123 14:35:11.524929 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"f1b9dee7-4afa-4bdc-88fc-f610d0bca84d","Type":"ContainerStarted","Data":"575370c07292e3d956d8a0e40335b6219090d6e10fbe3d288c76deb77fcfe67e"} Jan 23 14:35:11 crc kubenswrapper[4775]: I0123 14:35:11.524976 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"f1b9dee7-4afa-4bdc-88fc-f610d0bca84d","Type":"ContainerStarted","Data":"02382e22f435f1e3a7c73d28641f54e87db1dd32276e640504ea0f19f830c722"} Jan 23 14:35:11 crc kubenswrapper[4775]: I0123 14:35:11.526662 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:35:11 crc kubenswrapper[4775]: I0123 14:35:11.531992 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nd8ng" event={"ID":"4561aa6c-c92c-4005-8587-a8367a331257","Type":"ContainerStarted","Data":"e139d0816faaf7bfd497ac42998f4b734f9ed93f619125ddbc81d602777ae54c"} Jan 23 14:35:11 crc kubenswrapper[4775]: I0123 14:35:11.539486 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"93ee5e49-16f0-402a-9d8e-6f237110e663","Type":"ContainerStarted","Data":"5a0c9d73c99e74b57defba56af031189ee12f4eb97f9a8df2f62a83574ffa9a2"} Jan 23 14:35:11 crc kubenswrapper[4775]: I0123 14:35:11.539528 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"93ee5e49-16f0-402a-9d8e-6f237110e663","Type":"ContainerStarted","Data":"aa0a614b45a14d37314ee88b48d9cdfd5a2ac59674285aa0bcd8f730765f5458"} Jan 23 14:35:11 crc kubenswrapper[4775]: I0123 14:35:11.552950 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" 
event={"ID":"bde4903d-4224-4139-a444-3c5baf78ff7b","Type":"ContainerStarted","Data":"6eb0a59b18194a13bbf978de13cdca6d55273f8b0946c59e7a3ffc58619e5617"} Jan 23 14:35:11 crc kubenswrapper[4775]: I0123 14:35:11.572653 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.572635979 podStartE2EDuration="2.572635979s" podCreationTimestamp="2026-01-23 14:35:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:35:11.572021182 +0000 UTC m=+1858.566849922" watchObservedRunningTime="2026-01-23 14:35:11.572635979 +0000 UTC m=+1858.567464719" Jan 23 14:35:11 crc kubenswrapper[4775]: I0123 14:35:11.573734 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podStartSLOduration=2.5737304979999998 podStartE2EDuration="2.573730498s" podCreationTimestamp="2026-01-23 14:35:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:35:11.544940753 +0000 UTC m=+1858.539769493" watchObservedRunningTime="2026-01-23 14:35:11.573730498 +0000 UTC m=+1858.568559238" Jan 23 14:35:11 crc kubenswrapper[4775]: I0123 14:35:11.730952 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3bbc7d7-fc9d-490e-9610-55805e5e876c" path="/var/lib/kubelet/pods/a3bbc7d7-fc9d-490e-9610-55805e5e876c/volumes" Jan 23 14:35:12 crc kubenswrapper[4775]: I0123 14:35:12.566633 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"bde4903d-4224-4139-a444-3c5baf78ff7b","Type":"ContainerStarted","Data":"86f7dc44e36aa4ad8a9b68c7e60260e8bf1d3fc6fbcb2e1071f96de63df5b107"} Jan 23 14:35:12 crc kubenswrapper[4775]: I0123 14:35:12.568276 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:12 crc kubenswrapper[4775]: I0123 14:35:12.573795 4775 generic.go:334] "Generic (PLEG): container finished" podID="4561aa6c-c92c-4005-8587-a8367a331257" containerID="d86b7852f950b38bf17633f226980afe5a97aebd085dea51e06ffca20bbd08f6" exitCode=0 Jan 23 14:35:12 crc kubenswrapper[4775]: I0123 14:35:12.573930 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nd8ng" event={"ID":"4561aa6c-c92c-4005-8587-a8367a331257","Type":"ContainerDied","Data":"d86b7852f950b38bf17633f226980afe5a97aebd085dea51e06ffca20bbd08f6"} Jan 23 14:35:12 crc kubenswrapper[4775]: I0123 14:35:12.600727 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podStartSLOduration=2.600703348 podStartE2EDuration="2.600703348s" podCreationTimestamp="2026-01-23 14:35:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:35:12.589571168 +0000 UTC m=+1859.584399948" watchObservedRunningTime="2026-01-23 14:35:12.600703348 +0000 UTC m=+1859.595532128" Jan 23 14:35:12 crc kubenswrapper[4775]: I0123 14:35:12.642472 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:12 crc kubenswrapper[4775]: I0123 14:35:12.843010 4775 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.158924 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nmtbr"] Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.161054 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.183382 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nmtbr"] Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.183866 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt6gq\" (UniqueName: \"kubernetes.io/projected/0fa87919-c37c-422f-8c5d-f5f54162a229-kube-api-access-vt6gq\") pod \"community-operators-nmtbr\" (UID: \"0fa87919-c37c-422f-8c5d-f5f54162a229\") " pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.184013 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fa87919-c37c-422f-8c5d-f5f54162a229-utilities\") pod \"community-operators-nmtbr\" (UID: \"0fa87919-c37c-422f-8c5d-f5f54162a229\") " pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.184083 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fa87919-c37c-422f-8c5d-f5f54162a229-catalog-content\") pod \"community-operators-nmtbr\" (UID: \"0fa87919-c37c-422f-8c5d-f5f54162a229\") " pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.285745 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt6gq\" (UniqueName: \"kubernetes.io/projected/0fa87919-c37c-422f-8c5d-f5f54162a229-kube-api-access-vt6gq\") pod \"community-operators-nmtbr\" (UID: \"0fa87919-c37c-422f-8c5d-f5f54162a229\") " pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.285870 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fa87919-c37c-422f-8c5d-f5f54162a229-utilities\") pod \"community-operators-nmtbr\" (UID: \"0fa87919-c37c-422f-8c5d-f5f54162a229\") " pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.285910 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fa87919-c37c-422f-8c5d-f5f54162a229-catalog-content\") pod \"community-operators-nmtbr\" (UID: \"0fa87919-c37c-422f-8c5d-f5f54162a229\") " pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.286329 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fa87919-c37c-422f-8c5d-f5f54162a229-catalog-content\") pod \"community-operators-nmtbr\" (UID: \"0fa87919-c37c-422f-8c5d-f5f54162a229\") " pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.286652 4775 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fa87919-c37c-422f-8c5d-f5f54162a229-utilities\") pod \"community-operators-nmtbr\" (UID: \"0fa87919-c37c-422f-8c5d-f5f54162a229\") " pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.312582 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt6gq\" (UniqueName: \"kubernetes.io/projected/0fa87919-c37c-422f-8c5d-f5f54162a229-kube-api-access-vt6gq\") pod \"community-operators-nmtbr\" (UID: \"0fa87919-c37c-422f-8c5d-f5f54162a229\") " pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.354012 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kzsg7"] Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.355678 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.372755 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzsg7"] Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.387790 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d91e4cde-f59f-4bc9-9f11-bc05386b065c-utilities\") pod \"redhat-marketplace-kzsg7\" (UID: \"d91e4cde-f59f-4bc9-9f11-bc05386b065c\") " pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.388001 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5csb\" (UniqueName: \"kubernetes.io/projected/d91e4cde-f59f-4bc9-9f11-bc05386b065c-kube-api-access-b5csb\") pod \"redhat-marketplace-kzsg7\" (UID: \"d91e4cde-f59f-4bc9-9f11-bc05386b065c\") " pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.388109 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d91e4cde-f59f-4bc9-9f11-bc05386b065c-catalog-content\") pod \"redhat-marketplace-kzsg7\" (UID: \"d91e4cde-f59f-4bc9-9f11-bc05386b065c\") " pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.489635 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.489957 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d91e4cde-f59f-4bc9-9f11-bc05386b065c-catalog-content\") pod \"redhat-marketplace-kzsg7\" (UID: \"d91e4cde-f59f-4bc9-9f11-bc05386b065c\") " pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.490016 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d91e4cde-f59f-4bc9-9f11-bc05386b065c-utilities\") pod \"redhat-marketplace-kzsg7\" (UID: \"d91e4cde-f59f-4bc9-9f11-bc05386b065c\") " pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.490080 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5csb\" (UniqueName: \"kubernetes.io/projected/d91e4cde-f59f-4bc9-9f11-bc05386b065c-kube-api-access-b5csb\") pod \"redhat-marketplace-kzsg7\" (UID: \"d91e4cde-f59f-4bc9-9f11-bc05386b065c\") " pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.490457 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d91e4cde-f59f-4bc9-9f11-bc05386b065c-catalog-content\") pod \"redhat-marketplace-kzsg7\" (UID: \"d91e4cde-f59f-4bc9-9f11-bc05386b065c\") " pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.490578 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d91e4cde-f59f-4bc9-9f11-bc05386b065c-utilities\") pod \"redhat-marketplace-kzsg7\" (UID: \"d91e4cde-f59f-4bc9-9f11-bc05386b065c\") " pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.511508 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5csb\" (UniqueName: \"kubernetes.io/projected/d91e4cde-f59f-4bc9-9f11-bc05386b065c-kube-api-access-b5csb\") pod \"redhat-marketplace-kzsg7\" (UID: \"d91e4cde-f59f-4bc9-9f11-bc05386b065c\") " pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.599221 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nd8ng" event={"ID":"4561aa6c-c92c-4005-8587-a8367a331257","Type":"ContainerStarted","Data":"9970a01e583becbfe2474b23c43d2606a65ffdd1b62802118ea464e68db123a0"} Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.680041 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:14 crc kubenswrapper[4775]: I0123 14:35:14.259550 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nmtbr"] Jan 23 14:35:14 crc kubenswrapper[4775]: I0123 14:35:14.361288 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzsg7"] Jan 23 14:35:14 crc kubenswrapper[4775]: W0123 14:35:14.366446 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd91e4cde_f59f_4bc9_9f11_bc05386b065c.slice/crio-3b068f6ecde903e50cee9692d31810d6118d53fd385280961985c877c641c0f9 WatchSource:0}: Error finding container 3b068f6ecde903e50cee9692d31810d6118d53fd385280961985c877c641c0f9: Status 404 returned error can't find the container with id 3b068f6ecde903e50cee9692d31810d6118d53fd385280961985c877c641c0f9 Jan 23 14:35:14 crc kubenswrapper[4775]: I0123 14:35:14.608060 4775 generic.go:334] "Generic (PLEG): container finished" podID="4561aa6c-c92c-4005-8587-a8367a331257" containerID="9970a01e583becbfe2474b23c43d2606a65ffdd1b62802118ea464e68db123a0" exitCode=0 Jan 23 14:35:14 crc kubenswrapper[4775]: I0123 14:35:14.608129 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nd8ng" event={"ID":"4561aa6c-c92c-4005-8587-a8367a331257","Type":"ContainerDied","Data":"9970a01e583becbfe2474b23c43d2606a65ffdd1b62802118ea464e68db123a0"} Jan 23 14:35:14 crc kubenswrapper[4775]: I0123 14:35:14.609125 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzsg7" event={"ID":"d91e4cde-f59f-4bc9-9f11-bc05386b065c","Type":"ContainerStarted","Data":"ada03641fce0fa691409ed399e7d688cfdebf997e0d324b6a8ee7ed3d292e94c"} Jan 23 14:35:14 crc kubenswrapper[4775]: I0123 14:35:14.609148 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzsg7" event={"ID":"d91e4cde-f59f-4bc9-9f11-bc05386b065c","Type":"ContainerStarted","Data":"3b068f6ecde903e50cee9692d31810d6118d53fd385280961985c877c641c0f9"} Jan 23 14:35:14 crc kubenswrapper[4775]: I0123 14:35:14.612642 4775 generic.go:334] "Generic (PLEG): container finished" podID="0fa87919-c37c-422f-8c5d-f5f54162a229" containerID="9fd15c51163da7b67d0215d00ae11cb461ae55a7ced3abddf9afbf2b1caac92d" exitCode=0 Jan 23 14:35:14 crc kubenswrapper[4775]: I0123 14:35:14.612882 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmtbr" event={"ID":"0fa87919-c37c-422f-8c5d-f5f54162a229","Type":"ContainerDied","Data":"9fd15c51163da7b67d0215d00ae11cb461ae55a7ced3abddf9afbf2b1caac92d"} Jan 23 14:35:14 crc kubenswrapper[4775]: I0123 14:35:14.612903 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmtbr" event={"ID":"0fa87919-c37c-422f-8c5d-f5f54162a229","Type":"ContainerStarted","Data":"fd675cafec4d98add23e159f65f402c9edc343315ba027b2b7ac636cb9573a20"} Jan 23 14:35:15 crc kubenswrapper[4775]: I0123 14:35:15.623003 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmtbr" event={"ID":"0fa87919-c37c-422f-8c5d-f5f54162a229","Type":"ContainerStarted","Data":"2342482df363e816214bfa63cd48acba94e6573b39d29a37fe7dd668f947c7ec"} Jan 23 14:35:15 crc kubenswrapper[4775]: I0123 14:35:15.626869 4775 generic.go:334] "Generic (PLEG): container finished" 
podID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerID="86f7dc44e36aa4ad8a9b68c7e60260e8bf1d3fc6fbcb2e1071f96de63df5b107" exitCode=0 Jan 23 14:35:15 crc kubenswrapper[4775]: I0123 14:35:15.626952 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"bde4903d-4224-4139-a444-3c5baf78ff7b","Type":"ContainerDied","Data":"86f7dc44e36aa4ad8a9b68c7e60260e8bf1d3fc6fbcb2e1071f96de63df5b107"} Jan 23 14:35:15 crc kubenswrapper[4775]: I0123 14:35:15.627518 4775 scope.go:117] "RemoveContainer" containerID="86f7dc44e36aa4ad8a9b68c7e60260e8bf1d3fc6fbcb2e1071f96de63df5b107" Jan 23 14:35:15 crc kubenswrapper[4775]: I0123 14:35:15.631082 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nd8ng" event={"ID":"4561aa6c-c92c-4005-8587-a8367a331257","Type":"ContainerStarted","Data":"82785ee402f14c834fde416737c772f725122a72eb1b01032cbbd13aa84e6cea"} Jan 23 14:35:15 crc kubenswrapper[4775]: I0123 14:35:15.635241 4775 generic.go:334] "Generic (PLEG): container finished" podID="d91e4cde-f59f-4bc9-9f11-bc05386b065c" containerID="ada03641fce0fa691409ed399e7d688cfdebf997e0d324b6a8ee7ed3d292e94c" exitCode=0 Jan 23 14:35:15 crc kubenswrapper[4775]: I0123 14:35:15.635287 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzsg7" event={"ID":"d91e4cde-f59f-4bc9-9f11-bc05386b065c","Type":"ContainerDied","Data":"ada03641fce0fa691409ed399e7d688cfdebf997e0d324b6a8ee7ed3d292e94c"} Jan 23 14:35:15 crc kubenswrapper[4775]: I0123 14:35:15.713423 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nd8ng" podStartSLOduration=3.22246622 podStartE2EDuration="5.713396393s" podCreationTimestamp="2026-01-23 14:35:10 +0000 UTC" firstStartedPulling="2026-01-23 14:35:12.576435624 +0000 UTC m=+1859.571264404" lastFinishedPulling="2026-01-23 14:35:15.067365837 +0000 UTC m=+1862.062194577" observedRunningTime="2026-01-23 14:35:15.710052773 +0000 UTC m=+1862.704881523" watchObservedRunningTime="2026-01-23 14:35:15.713396393 +0000 UTC m=+1862.708225163" Jan 23 14:35:15 crc kubenswrapper[4775]: I0123 14:35:15.885545 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:16 crc kubenswrapper[4775]: I0123 14:35:16.651486 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"bde4903d-4224-4139-a444-3c5baf78ff7b","Type":"ContainerStarted","Data":"fb284da39186f2ab9d4d50e0c08df4cb63745374c070a74a4239a3a6536ab15f"} Jan 23 14:35:16 crc kubenswrapper[4775]: I0123 14:35:16.652057 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:16 crc kubenswrapper[4775]: I0123 14:35:16.657004 4775 generic.go:334] "Generic (PLEG): container finished" podID="d91e4cde-f59f-4bc9-9f11-bc05386b065c" containerID="0f16ae8937f5fb70cd5743359a2f7a31c4f3df2c152d44a3cc32f6e7bc378055" exitCode=0 Jan 23 14:35:16 crc kubenswrapper[4775]: I0123 14:35:16.657127 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzsg7" event={"ID":"d91e4cde-f59f-4bc9-9f11-bc05386b065c","Type":"ContainerDied","Data":"0f16ae8937f5fb70cd5743359a2f7a31c4f3df2c152d44a3cc32f6e7bc378055"} Jan 23 14:35:16 crc kubenswrapper[4775]: I0123 14:35:16.665677 
4775 generic.go:334] "Generic (PLEG): container finished" podID="0fa87919-c37c-422f-8c5d-f5f54162a229" containerID="2342482df363e816214bfa63cd48acba94e6573b39d29a37fe7dd668f947c7ec" exitCode=0 Jan 23 14:35:16 crc kubenswrapper[4775]: I0123 14:35:16.667413 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmtbr" event={"ID":"0fa87919-c37c-422f-8c5d-f5f54162a229","Type":"ContainerDied","Data":"2342482df363e816214bfa63cd48acba94e6573b39d29a37fe7dd668f947c7ec"} Jan 23 14:35:16 crc kubenswrapper[4775]: I0123 14:35:16.694730 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:17 crc kubenswrapper[4775]: I0123 14:35:17.674860 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzsg7" event={"ID":"d91e4cde-f59f-4bc9-9f11-bc05386b065c","Type":"ContainerStarted","Data":"046c1051d3c02cded54b9aeb6c0f3033ce2b334c91ae79498769d193f70da826"} Jan 23 14:35:17 crc kubenswrapper[4775]: I0123 14:35:17.678371 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmtbr" event={"ID":"0fa87919-c37c-422f-8c5d-f5f54162a229","Type":"ContainerStarted","Data":"9c3b807319ac23515db33902dbe750669bfbce758abed195bb2690280ffd34b0"} Jan 23 14:35:17 crc kubenswrapper[4775]: I0123 14:35:17.705425 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kzsg7" podStartSLOduration=3.179606103 podStartE2EDuration="4.705401943s" podCreationTimestamp="2026-01-23 14:35:13 +0000 UTC" firstStartedPulling="2026-01-23 14:35:15.637540719 +0000 UTC m=+1862.632369459" lastFinishedPulling="2026-01-23 14:35:17.163336549 +0000 UTC m=+1864.158165299" observedRunningTime="2026-01-23 14:35:17.700782189 +0000 UTC m=+1864.695610929" watchObservedRunningTime="2026-01-23 14:35:17.705401943 +0000 UTC m=+1864.700230693" Jan 23 14:35:17 crc kubenswrapper[4775]: I0123 14:35:17.843168 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:35:17 crc kubenswrapper[4775]: I0123 14:35:17.897100 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:35:17 crc kubenswrapper[4775]: I0123 14:35:17.934937 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nmtbr" podStartSLOduration=2.400378958 podStartE2EDuration="4.934908987s" podCreationTimestamp="2026-01-23 14:35:13 +0000 UTC" firstStartedPulling="2026-01-23 14:35:14.614569847 +0000 UTC m=+1861.609398617" lastFinishedPulling="2026-01-23 14:35:17.149099886 +0000 UTC m=+1864.143928646" observedRunningTime="2026-01-23 14:35:17.734274471 +0000 UTC m=+1864.729103221" watchObservedRunningTime="2026-01-23 14:35:17.934908987 +0000 UTC m=+1864.929737767" Jan 23 14:35:18 crc kubenswrapper[4775]: I0123 14:35:18.713483 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:35:18 crc kubenswrapper[4775]: I0123 14:35:18.893879 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:35:19 crc kubenswrapper[4775]: I0123 14:35:19.933910 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:19 crc kubenswrapper[4775]: I0123 14:35:19.934259 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:19 crc kubenswrapper[4775]: I0123 14:35:19.999440 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:35:20 crc kubenswrapper[4775]: I0123 14:35:20.710666 4775 generic.go:334] "Generic (PLEG): container finished" podID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerID="fb284da39186f2ab9d4d50e0c08df4cb63745374c070a74a4239a3a6536ab15f" exitCode=0 Jan 23 14:35:20 crc kubenswrapper[4775]: I0123 14:35:20.710730 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"bde4903d-4224-4139-a444-3c5baf78ff7b","Type":"ContainerDied","Data":"fb284da39186f2ab9d4d50e0c08df4cb63745374c070a74a4239a3a6536ab15f"} Jan 23 14:35:20 crc kubenswrapper[4775]: I0123 14:35:20.710774 4775 scope.go:117] "RemoveContainer" containerID="86f7dc44e36aa4ad8a9b68c7e60260e8bf1d3fc6fbcb2e1071f96de63df5b107" Jan 23 14:35:20 crc kubenswrapper[4775]: I0123 14:35:20.711900 4775 scope.go:117] "RemoveContainer" containerID="fb284da39186f2ab9d4d50e0c08df4cb63745374c070a74a4239a3a6536ab15f" Jan 23 14:35:20 crc kubenswrapper[4775]: E0123 14:35:20.712178 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-kuttl-cell1-compute-fake1-compute-compute\" with CrashLoopBackOff: \"back-off 10s restarting failed container=nova-kuttl-cell1-compute-fake1-compute-compute pod=nova-kuttl-cell1-compute-fake1-compute-0_nova-kuttl-default(bde4903d-4224-4139-a444-3c5baf78ff7b)\"" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" Jan 23 14:35:20 crc kubenswrapper[4775]: I0123 14:35:20.885503 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:20 crc kubenswrapper[4775]: I0123 14:35:20.885560 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:21 crc kubenswrapper[4775]: I0123 14:35:21.016037 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="93ee5e49-16f0-402a-9d8e-6f237110e663" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.202:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:35:21 crc kubenswrapper[4775]: I0123 14:35:21.016070 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="93ee5e49-16f0-402a-9d8e-6f237110e663" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.202:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:35:21 crc kubenswrapper[4775]: I0123 14:35:21.174900 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:21 crc kubenswrapper[4775]: I0123 14:35:21.174935 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:21 crc kubenswrapper[4775]: I0123 14:35:21.250432 4775 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:21 crc kubenswrapper[4775]: I0123 14:35:21.748274 4775 scope.go:117] "RemoveContainer" containerID="fb284da39186f2ab9d4d50e0c08df4cb63745374c070a74a4239a3a6536ab15f" Jan 23 14:35:21 crc kubenswrapper[4775]: E0123 14:35:21.748693 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-kuttl-cell1-compute-fake1-compute-compute\" with CrashLoopBackOff: \"back-off 10s restarting failed container=nova-kuttl-cell1-compute-fake1-compute-compute pod=nova-kuttl-cell1-compute-fake1-compute-0_nova-kuttl-default(bde4903d-4224-4139-a444-3c5baf78ff7b)\"" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" Jan 23 14:35:21 crc kubenswrapper[4775]: I0123 14:35:21.820020 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:22 crc kubenswrapper[4775]: I0123 14:35:22.946401 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nd8ng"] Jan 23 14:35:23 crc kubenswrapper[4775]: I0123 14:35:23.489930 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:23 crc kubenswrapper[4775]: I0123 14:35:23.489995 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:23 crc kubenswrapper[4775]: I0123 14:35:23.587704 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:23 crc kubenswrapper[4775]: I0123 14:35:23.681374 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:23 crc kubenswrapper[4775]: I0123 14:35:23.681490 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:23 crc kubenswrapper[4775]: I0123 14:35:23.744798 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:23 crc kubenswrapper[4775]: I0123 14:35:23.783338 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nd8ng" podUID="4561aa6c-c92c-4005-8587-a8367a331257" containerName="registry-server" containerID="cri-o://82785ee402f14c834fde416737c772f725122a72eb1b01032cbbd13aa84e6cea" gracePeriod=2 Jan 23 14:35:23 crc kubenswrapper[4775]: I0123 14:35:23.853860 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:23 crc kubenswrapper[4775]: I0123 14:35:23.860259 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.241643 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.294491 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4561aa6c-c92c-4005-8587-a8367a331257-utilities\") pod \"4561aa6c-c92c-4005-8587-a8367a331257\" (UID: \"4561aa6c-c92c-4005-8587-a8367a331257\") " Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.294746 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4561aa6c-c92c-4005-8587-a8367a331257-catalog-content\") pod \"4561aa6c-c92c-4005-8587-a8367a331257\" (UID: \"4561aa6c-c92c-4005-8587-a8367a331257\") " Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.294884 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwrjm\" (UniqueName: \"kubernetes.io/projected/4561aa6c-c92c-4005-8587-a8367a331257-kube-api-access-gwrjm\") pod \"4561aa6c-c92c-4005-8587-a8367a331257\" (UID: \"4561aa6c-c92c-4005-8587-a8367a331257\") " Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.295870 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4561aa6c-c92c-4005-8587-a8367a331257-utilities" (OuterVolumeSpecName: "utilities") pod "4561aa6c-c92c-4005-8587-a8367a331257" (UID: "4561aa6c-c92c-4005-8587-a8367a331257"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.303349 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4561aa6c-c92c-4005-8587-a8367a331257-kube-api-access-gwrjm" (OuterVolumeSpecName: "kube-api-access-gwrjm") pod "4561aa6c-c92c-4005-8587-a8367a331257" (UID: "4561aa6c-c92c-4005-8587-a8367a331257"). InnerVolumeSpecName "kube-api-access-gwrjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.364112 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4561aa6c-c92c-4005-8587-a8367a331257-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4561aa6c-c92c-4005-8587-a8367a331257" (UID: "4561aa6c-c92c-4005-8587-a8367a331257"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.397014 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwrjm\" (UniqueName: \"kubernetes.io/projected/4561aa6c-c92c-4005-8587-a8367a331257-kube-api-access-gwrjm\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.397050 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4561aa6c-c92c-4005-8587-a8367a331257-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.397061 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4561aa6c-c92c-4005-8587-a8367a331257-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.796762 4775 generic.go:334] "Generic (PLEG): container finished" podID="4561aa6c-c92c-4005-8587-a8367a331257" containerID="82785ee402f14c834fde416737c772f725122a72eb1b01032cbbd13aa84e6cea" exitCode=0 Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.796884 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.796913 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nd8ng" event={"ID":"4561aa6c-c92c-4005-8587-a8367a331257","Type":"ContainerDied","Data":"82785ee402f14c834fde416737c772f725122a72eb1b01032cbbd13aa84e6cea"} Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.797370 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nd8ng" event={"ID":"4561aa6c-c92c-4005-8587-a8367a331257","Type":"ContainerDied","Data":"e139d0816faaf7bfd497ac42998f4b734f9ed93f619125ddbc81d602777ae54c"} Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.797405 4775 scope.go:117] "RemoveContainer" containerID="82785ee402f14c834fde416737c772f725122a72eb1b01032cbbd13aa84e6cea" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.823875 4775 scope.go:117] "RemoveContainer" containerID="9970a01e583becbfe2474b23c43d2606a65ffdd1b62802118ea464e68db123a0" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.837067 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nd8ng"] Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.848213 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nd8ng"] Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.871387 4775 scope.go:117] "RemoveContainer" containerID="d86b7852f950b38bf17633f226980afe5a97aebd085dea51e06ffca20bbd08f6" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.887571 4775 scope.go:117] "RemoveContainer" containerID="82785ee402f14c834fde416737c772f725122a72eb1b01032cbbd13aa84e6cea" Jan 23 14:35:24 crc kubenswrapper[4775]: E0123 14:35:24.888305 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82785ee402f14c834fde416737c772f725122a72eb1b01032cbbd13aa84e6cea\": container with ID starting with 82785ee402f14c834fde416737c772f725122a72eb1b01032cbbd13aa84e6cea not found: ID does not exist" containerID="82785ee402f14c834fde416737c772f725122a72eb1b01032cbbd13aa84e6cea" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.888347 
4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82785ee402f14c834fde416737c772f725122a72eb1b01032cbbd13aa84e6cea"} err="failed to get container status \"82785ee402f14c834fde416737c772f725122a72eb1b01032cbbd13aa84e6cea\": rpc error: code = NotFound desc = could not find container \"82785ee402f14c834fde416737c772f725122a72eb1b01032cbbd13aa84e6cea\": container with ID starting with 82785ee402f14c834fde416737c772f725122a72eb1b01032cbbd13aa84e6cea not found: ID does not exist" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.888373 4775 scope.go:117] "RemoveContainer" containerID="9970a01e583becbfe2474b23c43d2606a65ffdd1b62802118ea464e68db123a0" Jan 23 14:35:24 crc kubenswrapper[4775]: E0123 14:35:24.888958 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9970a01e583becbfe2474b23c43d2606a65ffdd1b62802118ea464e68db123a0\": container with ID starting with 9970a01e583becbfe2474b23c43d2606a65ffdd1b62802118ea464e68db123a0 not found: ID does not exist" containerID="9970a01e583becbfe2474b23c43d2606a65ffdd1b62802118ea464e68db123a0" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.888979 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9970a01e583becbfe2474b23c43d2606a65ffdd1b62802118ea464e68db123a0"} err="failed to get container status \"9970a01e583becbfe2474b23c43d2606a65ffdd1b62802118ea464e68db123a0\": rpc error: code = NotFound desc = could not find container \"9970a01e583becbfe2474b23c43d2606a65ffdd1b62802118ea464e68db123a0\": container with ID starting with 9970a01e583becbfe2474b23c43d2606a65ffdd1b62802118ea464e68db123a0 not found: ID does not exist" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.889007 4775 scope.go:117] "RemoveContainer" containerID="d86b7852f950b38bf17633f226980afe5a97aebd085dea51e06ffca20bbd08f6" Jan 23 14:35:24 crc kubenswrapper[4775]: E0123 14:35:24.889722 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d86b7852f950b38bf17633f226980afe5a97aebd085dea51e06ffca20bbd08f6\": container with ID starting with d86b7852f950b38bf17633f226980afe5a97aebd085dea51e06ffca20bbd08f6 not found: ID does not exist" containerID="d86b7852f950b38bf17633f226980afe5a97aebd085dea51e06ffca20bbd08f6" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.889766 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d86b7852f950b38bf17633f226980afe5a97aebd085dea51e06ffca20bbd08f6"} err="failed to get container status \"d86b7852f950b38bf17633f226980afe5a97aebd085dea51e06ffca20bbd08f6\": rpc error: code = NotFound desc = could not find container \"d86b7852f950b38bf17633f226980afe5a97aebd085dea51e06ffca20bbd08f6\": container with ID starting with d86b7852f950b38bf17633f226980afe5a97aebd085dea51e06ffca20bbd08f6 not found: ID does not exist" Jan 23 14:35:25 crc kubenswrapper[4775]: I0123 14:35:25.727221 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4561aa6c-c92c-4005-8587-a8367a331257" path="/var/lib/kubelet/pods/4561aa6c-c92c-4005-8587-a8367a331257/volumes" Jan 23 14:35:25 crc kubenswrapper[4775]: I0123 14:35:25.949395 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nmtbr"] Jan 23 14:35:25 crc kubenswrapper[4775]: I0123 14:35:25.949650 4775 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/community-operators-nmtbr" podUID="0fa87919-c37c-422f-8c5d-f5f54162a229" containerName="registry-server" containerID="cri-o://9c3b807319ac23515db33902dbe750669bfbce758abed195bb2690280ffd34b0" gracePeriod=2 Jan 23 14:35:26 crc kubenswrapper[4775]: I0123 14:35:26.827428 4775 generic.go:334] "Generic (PLEG): container finished" podID="0fa87919-c37c-422f-8c5d-f5f54162a229" containerID="9c3b807319ac23515db33902dbe750669bfbce758abed195bb2690280ffd34b0" exitCode=0 Jan 23 14:35:26 crc kubenswrapper[4775]: I0123 14:35:26.827834 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmtbr" event={"ID":"0fa87919-c37c-422f-8c5d-f5f54162a229","Type":"ContainerDied","Data":"9c3b807319ac23515db33902dbe750669bfbce758abed195bb2690280ffd34b0"} Jan 23 14:35:26 crc kubenswrapper[4775]: I0123 14:35:26.964897 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzsg7"] Jan 23 14:35:26 crc kubenswrapper[4775]: I0123 14:35:26.965521 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kzsg7" podUID="d91e4cde-f59f-4bc9-9f11-bc05386b065c" containerName="registry-server" containerID="cri-o://046c1051d3c02cded54b9aeb6c0f3033ce2b334c91ae79498769d193f70da826" gracePeriod=2 Jan 23 14:35:26 crc kubenswrapper[4775]: I0123 14:35:26.985409 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.044710 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt6gq\" (UniqueName: \"kubernetes.io/projected/0fa87919-c37c-422f-8c5d-f5f54162a229-kube-api-access-vt6gq\") pod \"0fa87919-c37c-422f-8c5d-f5f54162a229\" (UID: \"0fa87919-c37c-422f-8c5d-f5f54162a229\") " Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.044765 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fa87919-c37c-422f-8c5d-f5f54162a229-catalog-content\") pod \"0fa87919-c37c-422f-8c5d-f5f54162a229\" (UID: \"0fa87919-c37c-422f-8c5d-f5f54162a229\") " Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.044893 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fa87919-c37c-422f-8c5d-f5f54162a229-utilities\") pod \"0fa87919-c37c-422f-8c5d-f5f54162a229\" (UID: \"0fa87919-c37c-422f-8c5d-f5f54162a229\") " Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.046549 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fa87919-c37c-422f-8c5d-f5f54162a229-utilities" (OuterVolumeSpecName: "utilities") pod "0fa87919-c37c-422f-8c5d-f5f54162a229" (UID: "0fa87919-c37c-422f-8c5d-f5f54162a229"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.052954 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fa87919-c37c-422f-8c5d-f5f54162a229-kube-api-access-vt6gq" (OuterVolumeSpecName: "kube-api-access-vt6gq") pod "0fa87919-c37c-422f-8c5d-f5f54162a229" (UID: "0fa87919-c37c-422f-8c5d-f5f54162a229"). InnerVolumeSpecName "kube-api-access-vt6gq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.147198 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fa87919-c37c-422f-8c5d-f5f54162a229-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0fa87919-c37c-422f-8c5d-f5f54162a229" (UID: "0fa87919-c37c-422f-8c5d-f5f54162a229"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.148113 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt6gq\" (UniqueName: \"kubernetes.io/projected/0fa87919-c37c-422f-8c5d-f5f54162a229-kube-api-access-vt6gq\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.148189 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fa87919-c37c-422f-8c5d-f5f54162a229-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.148208 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fa87919-c37c-422f-8c5d-f5f54162a229-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.842359 4775 generic.go:334] "Generic (PLEG): container finished" podID="d91e4cde-f59f-4bc9-9f11-bc05386b065c" containerID="046c1051d3c02cded54b9aeb6c0f3033ce2b334c91ae79498769d193f70da826" exitCode=0 Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.842418 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzsg7" event={"ID":"d91e4cde-f59f-4bc9-9f11-bc05386b065c","Type":"ContainerDied","Data":"046c1051d3c02cded54b9aeb6c0f3033ce2b334c91ae79498769d193f70da826"} Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.853353 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmtbr" event={"ID":"0fa87919-c37c-422f-8c5d-f5f54162a229","Type":"ContainerDied","Data":"fd675cafec4d98add23e159f65f402c9edc343315ba027b2b7ac636cb9573a20"} Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.853458 4775 scope.go:117] "RemoveContainer" containerID="9c3b807319ac23515db33902dbe750669bfbce758abed195bb2690280ffd34b0" Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.853955 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.883609 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nmtbr"] Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.886426 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nmtbr"] Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.907062 4775 scope.go:117] "RemoveContainer" containerID="2342482df363e816214bfa63cd48acba94e6573b39d29a37fe7dd668f947c7ec" Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.999314 4775 scope.go:117] "RemoveContainer" containerID="9fd15c51163da7b67d0215d00ae11cb461ae55a7ced3abddf9afbf2b1caac92d" Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.013935 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.069755 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5csb\" (UniqueName: \"kubernetes.io/projected/d91e4cde-f59f-4bc9-9f11-bc05386b065c-kube-api-access-b5csb\") pod \"d91e4cde-f59f-4bc9-9f11-bc05386b065c\" (UID: \"d91e4cde-f59f-4bc9-9f11-bc05386b065c\") " Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.069978 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d91e4cde-f59f-4bc9-9f11-bc05386b065c-utilities\") pod \"d91e4cde-f59f-4bc9-9f11-bc05386b065c\" (UID: \"d91e4cde-f59f-4bc9-9f11-bc05386b065c\") " Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.070069 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d91e4cde-f59f-4bc9-9f11-bc05386b065c-catalog-content\") pod \"d91e4cde-f59f-4bc9-9f11-bc05386b065c\" (UID: \"d91e4cde-f59f-4bc9-9f11-bc05386b065c\") " Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.072187 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d91e4cde-f59f-4bc9-9f11-bc05386b065c-utilities" (OuterVolumeSpecName: "utilities") pod "d91e4cde-f59f-4bc9-9f11-bc05386b065c" (UID: "d91e4cde-f59f-4bc9-9f11-bc05386b065c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.073783 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d91e4cde-f59f-4bc9-9f11-bc05386b065c-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.074039 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d91e4cde-f59f-4bc9-9f11-bc05386b065c-kube-api-access-b5csb" (OuterVolumeSpecName: "kube-api-access-b5csb") pod "d91e4cde-f59f-4bc9-9f11-bc05386b065c" (UID: "d91e4cde-f59f-4bc9-9f11-bc05386b065c"). InnerVolumeSpecName "kube-api-access-b5csb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.102008 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d91e4cde-f59f-4bc9-9f11-bc05386b065c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d91e4cde-f59f-4bc9-9f11-bc05386b065c" (UID: "d91e4cde-f59f-4bc9-9f11-bc05386b065c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.176036 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d91e4cde-f59f-4bc9-9f11-bc05386b065c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.176094 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5csb\" (UniqueName: \"kubernetes.io/projected/d91e4cde-f59f-4bc9-9f11-bc05386b065c-kube-api-access-b5csb\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.868059 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzsg7" event={"ID":"d91e4cde-f59f-4bc9-9f11-bc05386b065c","Type":"ContainerDied","Data":"3b068f6ecde903e50cee9692d31810d6118d53fd385280961985c877c641c0f9"} Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.868125 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.868163 4775 scope.go:117] "RemoveContainer" containerID="046c1051d3c02cded54b9aeb6c0f3033ce2b334c91ae79498769d193f70da826" Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.906075 4775 scope.go:117] "RemoveContainer" containerID="0f16ae8937f5fb70cd5743359a2f7a31c4f3df2c152d44a3cc32f6e7bc378055" Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.931721 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzsg7"] Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.941203 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzsg7"] Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.943701 4775 scope.go:117] "RemoveContainer" containerID="ada03641fce0fa691409ed399e7d688cfdebf997e0d324b6a8ee7ed3d292e94c" Jan 23 14:35:29 crc kubenswrapper[4775]: I0123 14:35:29.729943 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fa87919-c37c-422f-8c5d-f5f54162a229" path="/var/lib/kubelet/pods/0fa87919-c37c-422f-8c5d-f5f54162a229/volumes" Jan 23 14:35:29 crc kubenswrapper[4775]: I0123 14:35:29.731520 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d91e4cde-f59f-4bc9-9f11-bc05386b065c" path="/var/lib/kubelet/pods/d91e4cde-f59f-4bc9-9f11-bc05386b065c/volumes" Jan 23 14:35:29 crc kubenswrapper[4775]: I0123 14:35:29.943444 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:29 crc kubenswrapper[4775]: I0123 14:35:29.943892 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:29 crc kubenswrapper[4775]: I0123 14:35:29.944993 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:29 crc kubenswrapper[4775]: I0123 14:35:29.945086 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:29 crc kubenswrapper[4775]: I0123 14:35:29.950411 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:29 crc kubenswrapper[4775]: I0123 14:35:29.953310 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:35 crc kubenswrapper[4775]: I0123 14:35:35.714029 4775 scope.go:117] "RemoveContainer" containerID="fb284da39186f2ab9d4d50e0c08df4cb63745374c070a74a4239a3a6536ab15f" Jan 23 14:35:36 crc kubenswrapper[4775]: I0123 14:35:36.976257 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"bde4903d-4224-4139-a444-3c5baf78ff7b","Type":"ContainerStarted","Data":"0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76"} Jan 23 14:35:36 crc kubenswrapper[4775]: I0123 14:35:36.977609 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:37 crc kubenswrapper[4775]: I0123 14:35:37.032627 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.273619 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.285978 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.296610 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.305028 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.318360 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.330379 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.335904 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.388634 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.388935 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" containerName="nova-kuttl-metadata-log" containerID="cri-o://75614d1831bbac5592105e5265508722336cc15ee6a181f2f54c134aec1aa13b" gracePeriod=30 Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.389384 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://097b2364f83440a3132b6cb79cdb472334da74927128439c671d4d99b0398fa9" gracePeriod=30 Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.403026 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell11814-account-delete-f4rp4"] Jan 23 14:35:38 crc kubenswrapper[4775]: E0123 14:35:38.403380 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4561aa6c-c92c-4005-8587-a8367a331257" containerName="extract-utilities" Jan 23 14:35:38 crc 
kubenswrapper[4775]: I0123 14:35:38.403392 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="4561aa6c-c92c-4005-8587-a8367a331257" containerName="extract-utilities" Jan 23 14:35:38 crc kubenswrapper[4775]: E0123 14:35:38.403412 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d91e4cde-f59f-4bc9-9f11-bc05386b065c" containerName="extract-content" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.403418 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="d91e4cde-f59f-4bc9-9f11-bc05386b065c" containerName="extract-content" Jan 23 14:35:38 crc kubenswrapper[4775]: E0123 14:35:38.403428 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4561aa6c-c92c-4005-8587-a8367a331257" containerName="extract-content" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.403434 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="4561aa6c-c92c-4005-8587-a8367a331257" containerName="extract-content" Jan 23 14:35:38 crc kubenswrapper[4775]: E0123 14:35:38.403445 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fa87919-c37c-422f-8c5d-f5f54162a229" containerName="registry-server" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.403451 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fa87919-c37c-422f-8c5d-f5f54162a229" containerName="registry-server" Jan 23 14:35:38 crc kubenswrapper[4775]: E0123 14:35:38.403459 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d91e4cde-f59f-4bc9-9f11-bc05386b065c" containerName="registry-server" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.403464 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="d91e4cde-f59f-4bc9-9f11-bc05386b065c" containerName="registry-server" Jan 23 14:35:38 crc kubenswrapper[4775]: E0123 14:35:38.403473 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d91e4cde-f59f-4bc9-9f11-bc05386b065c" containerName="extract-utilities" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.403478 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="d91e4cde-f59f-4bc9-9f11-bc05386b065c" containerName="extract-utilities" Jan 23 14:35:38 crc kubenswrapper[4775]: E0123 14:35:38.403490 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fa87919-c37c-422f-8c5d-f5f54162a229" containerName="extract-utilities" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.403495 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fa87919-c37c-422f-8c5d-f5f54162a229" containerName="extract-utilities" Jan 23 14:35:38 crc kubenswrapper[4775]: E0123 14:35:38.403503 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fa87919-c37c-422f-8c5d-f5f54162a229" containerName="extract-content" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.403510 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fa87919-c37c-422f-8c5d-f5f54162a229" containerName="extract-content" Jan 23 14:35:38 crc kubenswrapper[4775]: E0123 14:35:38.403522 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4561aa6c-c92c-4005-8587-a8367a331257" containerName="registry-server" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.403529 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="4561aa6c-c92c-4005-8587-a8367a331257" containerName="registry-server" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.403718 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fa87919-c37c-422f-8c5d-f5f54162a229" 
containerName="registry-server" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.403729 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="4561aa6c-c92c-4005-8587-a8367a331257" containerName="registry-server" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.403749 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="d91e4cde-f59f-4bc9-9f11-bc05386b065c" containerName="registry-server" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.404299 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell11814-account-delete-f4rp4" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.412170 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell11814-account-delete-f4rp4"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.486666 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d8e2db8-c4c4-48ea-83a1-d750eb6de857-operator-scripts\") pod \"novacell11814-account-delete-f4rp4\" (UID: \"7d8e2db8-c4c4-48ea-83a1-d750eb6de857\") " pod="nova-kuttl-default/novacell11814-account-delete-f4rp4" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.486740 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l5p5\" (UniqueName: \"kubernetes.io/projected/7d8e2db8-c4c4-48ea-83a1-d750eb6de857-kube-api-access-2l5p5\") pod \"novacell11814-account-delete-f4rp4\" (UID: \"7d8e2db8-c4c4-48ea-83a1-d750eb6de857\") " pod="nova-kuttl-default/novacell11814-account-delete-f4rp4" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.487038 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novaapia3ac-account-delete-gzdbr"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.488756 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novaapia3ac-account-delete-gzdbr" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.494950 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.505406 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.505650 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="8dc76b90-669a-4df4-a976-1199443a8f55" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://d399db7d10a3f96ad48455263cc1a2f5c347077b872b70479e4c6c0cf205a7d5" gracePeriod=30 Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.512004 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.522947 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapia3ac-account-delete-gzdbr"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.580947 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.581297 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="053e93b4-4f28-478d-9065-20980afe9e20" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://ab22bfbf9613e4952570f4b58b9dfa2a5876ac2a81bea5c917b73f18bda88cfe" gracePeriod=30 Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.587688 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/216ac3cd-4e4b-40b9-b05d-be15cfe121ed-operator-scripts\") pod \"novaapia3ac-account-delete-gzdbr\" (UID: \"216ac3cd-4e4b-40b9-b05d-be15cfe121ed\") " pod="nova-kuttl-default/novaapia3ac-account-delete-gzdbr" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.587730 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d8e2db8-c4c4-48ea-83a1-d750eb6de857-operator-scripts\") pod \"novacell11814-account-delete-f4rp4\" (UID: \"7d8e2db8-c4c4-48ea-83a1-d750eb6de857\") " pod="nova-kuttl-default/novacell11814-account-delete-f4rp4" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.587765 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l5p5\" (UniqueName: \"kubernetes.io/projected/7d8e2db8-c4c4-48ea-83a1-d750eb6de857-kube-api-access-2l5p5\") pod \"novacell11814-account-delete-f4rp4\" (UID: \"7d8e2db8-c4c4-48ea-83a1-d750eb6de857\") " pod="nova-kuttl-default/novacell11814-account-delete-f4rp4" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.587846 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw59g\" (UniqueName: \"kubernetes.io/projected/216ac3cd-4e4b-40b9-b05d-be15cfe121ed-kube-api-access-kw59g\") pod \"novaapia3ac-account-delete-gzdbr\" (UID: \"216ac3cd-4e4b-40b9-b05d-be15cfe121ed\") " pod="nova-kuttl-default/novaapia3ac-account-delete-gzdbr" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.594451 4775 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d8e2db8-c4c4-48ea-83a1-d750eb6de857-operator-scripts\") pod \"novacell11814-account-delete-f4rp4\" (UID: \"7d8e2db8-c4c4-48ea-83a1-d750eb6de857\") " pod="nova-kuttl-default/novacell11814-account-delete-f4rp4" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.608096 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.608400 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="93ee5e49-16f0-402a-9d8e-6f237110e663" containerName="nova-kuttl-api-log" containerID="cri-o://aa0a614b45a14d37314ee88b48d9cdfd5a2ac59674285aa0bcd8f730765f5458" gracePeriod=30 Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.611372 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="93ee5e49-16f0-402a-9d8e-6f237110e663" containerName="nova-kuttl-api-api" containerID="cri-o://5a0c9d73c99e74b57defba56af031189ee12f4eb97f9a8df2f62a83574ffa9a2" gracePeriod=30 Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.637586 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.637977 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="f1b9dee7-4afa-4bdc-88fc-f610d0bca84d" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://575370c07292e3d956d8a0e40335b6219090d6e10fbe3d288c76deb77fcfe67e" gracePeriod=30 Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.641261 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l5p5\" (UniqueName: \"kubernetes.io/projected/7d8e2db8-c4c4-48ea-83a1-d750eb6de857-kube-api-access-2l5p5\") pod \"novacell11814-account-delete-f4rp4\" (UID: \"7d8e2db8-c4c4-48ea-83a1-d750eb6de857\") " pod="nova-kuttl-default/novacell11814-account-delete-f4rp4" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.666752 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.684055 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell04dcc-account-delete-bljlz"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.685216 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell04dcc-account-delete-bljlz" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.689864 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw59g\" (UniqueName: \"kubernetes.io/projected/216ac3cd-4e4b-40b9-b05d-be15cfe121ed-kube-api-access-kw59g\") pod \"novaapia3ac-account-delete-gzdbr\" (UID: \"216ac3cd-4e4b-40b9-b05d-be15cfe121ed\") " pod="nova-kuttl-default/novaapia3ac-account-delete-gzdbr" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.689934 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6g9q\" (UniqueName: \"kubernetes.io/projected/bc1717a4-664a-4a44-9206-0b5c472cbd50-kube-api-access-d6g9q\") pod \"novacell04dcc-account-delete-bljlz\" (UID: \"bc1717a4-664a-4a44-9206-0b5c472cbd50\") " pod="nova-kuttl-default/novacell04dcc-account-delete-bljlz" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.690010 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/216ac3cd-4e4b-40b9-b05d-be15cfe121ed-operator-scripts\") pod \"novaapia3ac-account-delete-gzdbr\" (UID: \"216ac3cd-4e4b-40b9-b05d-be15cfe121ed\") " pod="nova-kuttl-default/novaapia3ac-account-delete-gzdbr" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.690039 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc1717a4-664a-4a44-9206-0b5c472cbd50-operator-scripts\") pod \"novacell04dcc-account-delete-bljlz\" (UID: \"bc1717a4-664a-4a44-9206-0b5c472cbd50\") " pod="nova-kuttl-default/novacell04dcc-account-delete-bljlz" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.691352 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/216ac3cd-4e4b-40b9-b05d-be15cfe121ed-operator-scripts\") pod \"novaapia3ac-account-delete-gzdbr\" (UID: \"216ac3cd-4e4b-40b9-b05d-be15cfe121ed\") " pod="nova-kuttl-default/novaapia3ac-account-delete-gzdbr" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.702258 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.708420 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw59g\" (UniqueName: \"kubernetes.io/projected/216ac3cd-4e4b-40b9-b05d-be15cfe121ed-kube-api-access-kw59g\") pod \"novaapia3ac-account-delete-gzdbr\" (UID: \"216ac3cd-4e4b-40b9-b05d-be15cfe121ed\") " pod="nova-kuttl-default/novaapia3ac-account-delete-gzdbr" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.710996 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell04dcc-account-delete-bljlz"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.716896 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.717298 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podUID="51e63565-a2ef-4d12-af2f-f3dc6c2942d9" containerName="nova-kuttl-cell1-novncproxy-novncproxy" containerID="cri-o://adde5b85c57c8932f4247945dfd19a8b18268f554e69fc71d0caf9b3c97cbb35" gracePeriod=30 Jan 23 14:35:38 
crc kubenswrapper[4775]: I0123 14:35:38.743054 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell11814-account-delete-f4rp4" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.791415 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc1717a4-664a-4a44-9206-0b5c472cbd50-operator-scripts\") pod \"novacell04dcc-account-delete-bljlz\" (UID: \"bc1717a4-664a-4a44-9206-0b5c472cbd50\") " pod="nova-kuttl-default/novacell04dcc-account-delete-bljlz" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.791577 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6g9q\" (UniqueName: \"kubernetes.io/projected/bc1717a4-664a-4a44-9206-0b5c472cbd50-kube-api-access-d6g9q\") pod \"novacell04dcc-account-delete-bljlz\" (UID: \"bc1717a4-664a-4a44-9206-0b5c472cbd50\") " pod="nova-kuttl-default/novacell04dcc-account-delete-bljlz" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.792493 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc1717a4-664a-4a44-9206-0b5c472cbd50-operator-scripts\") pod \"novacell04dcc-account-delete-bljlz\" (UID: \"bc1717a4-664a-4a44-9206-0b5c472cbd50\") " pod="nova-kuttl-default/novacell04dcc-account-delete-bljlz" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.809407 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6g9q\" (UniqueName: \"kubernetes.io/projected/bc1717a4-664a-4a44-9206-0b5c472cbd50-kube-api-access-d6g9q\") pod \"novacell04dcc-account-delete-bljlz\" (UID: \"bc1717a4-664a-4a44-9206-0b5c472cbd50\") " pod="nova-kuttl-default/novacell04dcc-account-delete-bljlz" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.848108 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novaapia3ac-account-delete-gzdbr" Jan 23 14:35:38 crc kubenswrapper[4775]: E0123 14:35:38.891581 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d399db7d10a3f96ad48455263cc1a2f5c347077b872b70479e4c6c0cf205a7d5" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 23 14:35:38 crc kubenswrapper[4775]: E0123 14:35:38.895860 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d399db7d10a3f96ad48455263cc1a2f5c347077b872b70479e4c6c0cf205a7d5" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 23 14:35:38 crc kubenswrapper[4775]: E0123 14:35:38.901332 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d399db7d10a3f96ad48455263cc1a2f5c347077b872b70479e4c6c0cf205a7d5" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 23 14:35:38 crc kubenswrapper[4775]: E0123 14:35:38.901402 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="8dc76b90-669a-4df4-a976-1199443a8f55" containerName="nova-kuttl-cell1-conductor-conductor" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.995278 4775 generic.go:334] "Generic (PLEG): container finished" podID="93ee5e49-16f0-402a-9d8e-6f237110e663" containerID="aa0a614b45a14d37314ee88b48d9cdfd5a2ac59674285aa0bcd8f730765f5458" exitCode=143 Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.995360 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"93ee5e49-16f0-402a-9d8e-6f237110e663","Type":"ContainerDied","Data":"aa0a614b45a14d37314ee88b48d9cdfd5a2ac59674285aa0bcd8f730765f5458"} Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:38.999451 4775 generic.go:334] "Generic (PLEG): container finished" podID="8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" containerID="75614d1831bbac5592105e5265508722336cc15ee6a181f2f54c134aec1aa13b" exitCode=143 Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:38.999492 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4","Type":"ContainerDied","Data":"75614d1831bbac5592105e5265508722336cc15ee6a181f2f54c134aec1aa13b"} Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:38.999852 4775 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" secret="" err="secret \"nova-nova-kuttl-dockercfg-289sx\" not found" Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.004757 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell04dcc-account-delete-bljlz" Jan 23 14:35:39 crc kubenswrapper[4775]: E0123 14:35:39.096521 4775 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 23 14:35:39 crc kubenswrapper[4775]: E0123 14:35:39.096569 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data podName:bde4903d-4224-4139-a444-3c5baf78ff7b nodeName:}" failed. No retries permitted until 2026-01-23 14:35:39.596554805 +0000 UTC m=+1886.591383545 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "bde4903d-4224-4139-a444-3c5baf78ff7b") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.248761 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell11814-account-delete-f4rp4"] Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.325789 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapia3ac-account-delete-gzdbr"] Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.484206 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.488724 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell04dcc-account-delete-bljlz"] Jan 23 14:35:39 crc kubenswrapper[4775]: W0123 14:35:39.489147 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc1717a4_664a_4a44_9206_0b5c472cbd50.slice/crio-493d8cbf75b60552f0b7eabb631b7ee805b747cfaa54a588d934ac6f11e2c848 WatchSource:0}: Error finding container 493d8cbf75b60552f0b7eabb631b7ee805b747cfaa54a588d934ac6f11e2c848: Status 404 returned error can't find the container with id 493d8cbf75b60552f0b7eabb631b7ee805b747cfaa54a588d934ac6f11e2c848 Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.613298 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51e63565-a2ef-4d12-af2f-f3dc6c2942d9-config-data\") pod \"51e63565-a2ef-4d12-af2f-f3dc6c2942d9\" (UID: \"51e63565-a2ef-4d12-af2f-f3dc6c2942d9\") " Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.613505 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64gdf\" (UniqueName: \"kubernetes.io/projected/51e63565-a2ef-4d12-af2f-f3dc6c2942d9-kube-api-access-64gdf\") pod \"51e63565-a2ef-4d12-af2f-f3dc6c2942d9\" (UID: \"51e63565-a2ef-4d12-af2f-f3dc6c2942d9\") " Jan 23 14:35:39 crc kubenswrapper[4775]: E0123 14:35:39.613948 4775 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 23 14:35:39 crc kubenswrapper[4775]: E0123 14:35:39.614000 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data podName:bde4903d-4224-4139-a444-3c5baf78ff7b nodeName:}" failed. 
No retries permitted until 2026-01-23 14:35:40.613985846 +0000 UTC m=+1887.608814586 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "bde4903d-4224-4139-a444-3c5baf78ff7b") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.621241 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51e63565-a2ef-4d12-af2f-f3dc6c2942d9-kube-api-access-64gdf" (OuterVolumeSpecName: "kube-api-access-64gdf") pod "51e63565-a2ef-4d12-af2f-f3dc6c2942d9" (UID: "51e63565-a2ef-4d12-af2f-f3dc6c2942d9"). InnerVolumeSpecName "kube-api-access-64gdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.641956 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51e63565-a2ef-4d12-af2f-f3dc6c2942d9-config-data" (OuterVolumeSpecName: "config-data") pod "51e63565-a2ef-4d12-af2f-f3dc6c2942d9" (UID: "51e63565-a2ef-4d12-af2f-f3dc6c2942d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.715503 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64gdf\" (UniqueName: \"kubernetes.io/projected/51e63565-a2ef-4d12-af2f-f3dc6c2942d9-kube-api-access-64gdf\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.715855 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51e63565-a2ef-4d12-af2f-f3dc6c2942d9-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.725946 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="004165d0-70f3-4e04-8f77-1342a98147bb" path="/var/lib/kubelet/pods/004165d0-70f3-4e04-8f77-1342a98147bb/volumes" Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.726723 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71a6469b-2bd1-4004-9a3d-c9d87161efab" path="/var/lib/kubelet/pods/71a6469b-2bd1-4004-9a3d-c9d87161efab/volumes" Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.727517 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db75fd7c-ba91-4090-ac20-0009c06598f3" path="/var/lib/kubelet/pods/db75fd7c-ba91-4090-ac20-0009c06598f3/volumes" Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.728251 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec6263e3-855a-48e5-ae77-25462d7e5a13" path="/var/lib/kubelet/pods/ec6263e3-855a-48e5-ae77-25462d7e5a13/volumes" Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.730526 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f25e3b63-3402-4d38-8f18-e4f015797854" path="/var/lib/kubelet/pods/f25e3b63-3402-4d38-8f18-e4f015797854/volumes" Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.947073 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:35:39 crc kubenswrapper[4775]: E0123 14:35:39.968047 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="575370c07292e3d956d8a0e40335b6219090d6e10fbe3d288c76deb77fcfe67e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 23 14:35:39 crc kubenswrapper[4775]: E0123 14:35:39.978177 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="575370c07292e3d956d8a0e40335b6219090d6e10fbe3d288c76deb77fcfe67e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 23 14:35:39 crc kubenswrapper[4775]: E0123 14:35:39.981897 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="575370c07292e3d956d8a0e40335b6219090d6e10fbe3d288c76deb77fcfe67e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 23 14:35:39 crc kubenswrapper[4775]: E0123 14:35:39.981944 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="f1b9dee7-4afa-4bdc-88fc-f610d0bca84d" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.008864 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell04dcc-account-delete-bljlz" event={"ID":"bc1717a4-664a-4a44-9206-0b5c472cbd50","Type":"ContainerStarted","Data":"dfda1a9e78a513115b2113a2fcaec48ff69d5be5bceff17b19195b09fc695118"} Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.008910 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell04dcc-account-delete-bljlz" event={"ID":"bc1717a4-664a-4a44-9206-0b5c472cbd50","Type":"ContainerStarted","Data":"493d8cbf75b60552f0b7eabb631b7ee805b747cfaa54a588d934ac6f11e2c848"} Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.011432 4775 generic.go:334] "Generic (PLEG): container finished" podID="51e63565-a2ef-4d12-af2f-f3dc6c2942d9" containerID="adde5b85c57c8932f4247945dfd19a8b18268f554e69fc71d0caf9b3c97cbb35" exitCode=0 Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.011485 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"51e63565-a2ef-4d12-af2f-f3dc6c2942d9","Type":"ContainerDied","Data":"adde5b85c57c8932f4247945dfd19a8b18268f554e69fc71d0caf9b3c97cbb35"} Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.011505 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"51e63565-a2ef-4d12-af2f-f3dc6c2942d9","Type":"ContainerDied","Data":"176dcff14ce2e75b9b75fea74f3c3fe40830311cc826cb992f71f0968d9bd274"} Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.011524 4775 scope.go:117] "RemoveContainer" containerID="adde5b85c57c8932f4247945dfd19a8b18268f554e69fc71d0caf9b3c97cbb35" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.011629 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.015384 4775 generic.go:334] "Generic (PLEG): container finished" podID="216ac3cd-4e4b-40b9-b05d-be15cfe121ed" containerID="c66c6806d40d02d59cb9c150734f4cbd3c4f3513f91224480738c9614deade7b" exitCode=0 Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.015484 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapia3ac-account-delete-gzdbr" event={"ID":"216ac3cd-4e4b-40b9-b05d-be15cfe121ed","Type":"ContainerDied","Data":"c66c6806d40d02d59cb9c150734f4cbd3c4f3513f91224480738c9614deade7b"} Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.015540 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapia3ac-account-delete-gzdbr" event={"ID":"216ac3cd-4e4b-40b9-b05d-be15cfe121ed","Type":"ContainerStarted","Data":"f8ac7a707989f4b704ed34110c6cbeac748ed5bc390cc7fe8e9c1bd9c862dadb"} Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.017311 4775 generic.go:334] "Generic (PLEG): container finished" podID="8dc76b90-669a-4df4-a976-1199443a8f55" containerID="d399db7d10a3f96ad48455263cc1a2f5c347077b872b70479e4c6c0cf205a7d5" exitCode=0 Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.017367 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"8dc76b90-669a-4df4-a976-1199443a8f55","Type":"ContainerDied","Data":"d399db7d10a3f96ad48455263cc1a2f5c347077b872b70479e4c6c0cf205a7d5"} Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.017382 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"8dc76b90-669a-4df4-a976-1199443a8f55","Type":"ContainerDied","Data":"84970fe316cbca495dccd6939de0eed1e17d5dc5945a7756f3a045d8dd58f52a"} Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.017454 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.018622 4775 generic.go:334] "Generic (PLEG): container finished" podID="7d8e2db8-c4c4-48ea-83a1-d750eb6de857" containerID="5adc38c96008a8a594360e5e6bb09c834348a926f5530d7c364ad7b4ca6f9d2b" exitCode=0 Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.018755 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" containerID="cri-o://0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" gracePeriod=30 Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.018921 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell11814-account-delete-f4rp4" event={"ID":"7d8e2db8-c4c4-48ea-83a1-d750eb6de857","Type":"ContainerDied","Data":"5adc38c96008a8a594360e5e6bb09c834348a926f5530d7c364ad7b4ca6f9d2b"} Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.019080 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell11814-account-delete-f4rp4" event={"ID":"7d8e2db8-c4c4-48ea-83a1-d750eb6de857","Type":"ContainerStarted","Data":"2c60df97c5de83b16f76501e503728db1f38861868acc41b3bbf53e358943ce1"} Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.026779 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/novacell04dcc-account-delete-bljlz" podStartSLOduration=2.026761978 podStartE2EDuration="2.026761978s" podCreationTimestamp="2026-01-23 14:35:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:35:40.021722032 +0000 UTC m=+1887.016550782" watchObservedRunningTime="2026-01-23 14:35:40.026761978 +0000 UTC m=+1887.021590718" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.036959 4775 scope.go:117] "RemoveContainer" containerID="adde5b85c57c8932f4247945dfd19a8b18268f554e69fc71d0caf9b3c97cbb35" Jan 23 14:35:40 crc kubenswrapper[4775]: E0123 14:35:40.040710 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adde5b85c57c8932f4247945dfd19a8b18268f554e69fc71d0caf9b3c97cbb35\": container with ID starting with adde5b85c57c8932f4247945dfd19a8b18268f554e69fc71d0caf9b3c97cbb35 not found: ID does not exist" containerID="adde5b85c57c8932f4247945dfd19a8b18268f554e69fc71d0caf9b3c97cbb35" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.040743 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adde5b85c57c8932f4247945dfd19a8b18268f554e69fc71d0caf9b3c97cbb35"} err="failed to get container status \"adde5b85c57c8932f4247945dfd19a8b18268f554e69fc71d0caf9b3c97cbb35\": rpc error: code = NotFound desc = could not find container \"adde5b85c57c8932f4247945dfd19a8b18268f554e69fc71d0caf9b3c97cbb35\": container with ID starting with adde5b85c57c8932f4247945dfd19a8b18268f554e69fc71d0caf9b3c97cbb35 not found: ID does not exist" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.040772 4775 scope.go:117] "RemoveContainer" containerID="d399db7d10a3f96ad48455263cc1a2f5c347077b872b70479e4c6c0cf205a7d5" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.049563 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.055679 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.077297 4775 scope.go:117] "RemoveContainer" containerID="d399db7d10a3f96ad48455263cc1a2f5c347077b872b70479e4c6c0cf205a7d5" Jan 23 14:35:40 crc kubenswrapper[4775]: E0123 14:35:40.077850 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d399db7d10a3f96ad48455263cc1a2f5c347077b872b70479e4c6c0cf205a7d5\": container with ID starting with d399db7d10a3f96ad48455263cc1a2f5c347077b872b70479e4c6c0cf205a7d5 not found: ID does not exist" containerID="d399db7d10a3f96ad48455263cc1a2f5c347077b872b70479e4c6c0cf205a7d5" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.077889 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d399db7d10a3f96ad48455263cc1a2f5c347077b872b70479e4c6c0cf205a7d5"} err="failed to get container status \"d399db7d10a3f96ad48455263cc1a2f5c347077b872b70479e4c6c0cf205a7d5\": rpc error: code = NotFound desc = could not find container \"d399db7d10a3f96ad48455263cc1a2f5c347077b872b70479e4c6c0cf205a7d5\": container with ID starting with d399db7d10a3f96ad48455263cc1a2f5c347077b872b70479e4c6c0cf205a7d5 not found: ID does not exist" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.123150 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkqlt\" (UniqueName: \"kubernetes.io/projected/8dc76b90-669a-4df4-a976-1199443a8f55-kube-api-access-lkqlt\") pod \"8dc76b90-669a-4df4-a976-1199443a8f55\" (UID: \"8dc76b90-669a-4df4-a976-1199443a8f55\") " Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.123214 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc76b90-669a-4df4-a976-1199443a8f55-config-data\") pod \"8dc76b90-669a-4df4-a976-1199443a8f55\" (UID: \"8dc76b90-669a-4df4-a976-1199443a8f55\") " Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.128463 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dc76b90-669a-4df4-a976-1199443a8f55-kube-api-access-lkqlt" (OuterVolumeSpecName: "kube-api-access-lkqlt") pod "8dc76b90-669a-4df4-a976-1199443a8f55" (UID: "8dc76b90-669a-4df4-a976-1199443a8f55"). InnerVolumeSpecName "kube-api-access-lkqlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.145104 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dc76b90-669a-4df4-a976-1199443a8f55-config-data" (OuterVolumeSpecName: "config-data") pod "8dc76b90-669a-4df4-a976-1199443a8f55" (UID: "8dc76b90-669a-4df4-a976-1199443a8f55"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.225479 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkqlt\" (UniqueName: \"kubernetes.io/projected/8dc76b90-669a-4df4-a976-1199443a8f55-kube-api-access-lkqlt\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.225543 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc76b90-669a-4df4-a976-1199443a8f55-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.379426 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.385029 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.399005 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.530145 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/053e93b4-4f28-478d-9065-20980afe9e20-config-data\") pod \"053e93b4-4f28-478d-9065-20980afe9e20\" (UID: \"053e93b4-4f28-478d-9065-20980afe9e20\") " Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.530300 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvpt6\" (UniqueName: \"kubernetes.io/projected/053e93b4-4f28-478d-9065-20980afe9e20-kube-api-access-vvpt6\") pod \"053e93b4-4f28-478d-9065-20980afe9e20\" (UID: \"053e93b4-4f28-478d-9065-20980afe9e20\") " Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.537016 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/053e93b4-4f28-478d-9065-20980afe9e20-kube-api-access-vvpt6" (OuterVolumeSpecName: "kube-api-access-vvpt6") pod "053e93b4-4f28-478d-9065-20980afe9e20" (UID: "053e93b4-4f28-478d-9065-20980afe9e20"). InnerVolumeSpecName "kube-api-access-vvpt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.574557 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/053e93b4-4f28-478d-9065-20980afe9e20-config-data" (OuterVolumeSpecName: "config-data") pod "053e93b4-4f28-478d-9065-20980afe9e20" (UID: "053e93b4-4f28-478d-9065-20980afe9e20"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.631749 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/053e93b4-4f28-478d-9065-20980afe9e20-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.631786 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvpt6\" (UniqueName: \"kubernetes.io/projected/053e93b4-4f28-478d-9065-20980afe9e20-kube-api-access-vvpt6\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:40 crc kubenswrapper[4775]: E0123 14:35:40.632322 4775 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 23 14:35:40 crc kubenswrapper[4775]: E0123 14:35:40.632406 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data podName:bde4903d-4224-4139-a444-3c5baf78ff7b nodeName:}" failed. No retries permitted until 2026-01-23 14:35:42.632385075 +0000 UTC m=+1889.627213815 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "bde4903d-4224-4139-a444-3c5baf78ff7b") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 23 14:35:40 crc kubenswrapper[4775]: E0123 14:35:40.889212 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 23 14:35:40 crc kubenswrapper[4775]: E0123 14:35:40.891534 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 23 14:35:40 crc kubenswrapper[4775]: E0123 14:35:40.893291 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 23 14:35:40 crc kubenswrapper[4775]: E0123 14:35:40.893510 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.030312 4775 generic.go:334] "Generic (PLEG): container finished" podID="bc1717a4-664a-4a44-9206-0b5c472cbd50" containerID="dfda1a9e78a513115b2113a2fcaec48ff69d5be5bceff17b19195b09fc695118" exitCode=0 Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.030522 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="nova-kuttl-default/novacell04dcc-account-delete-bljlz" event={"ID":"bc1717a4-664a-4a44-9206-0b5c472cbd50","Type":"ContainerDied","Data":"dfda1a9e78a513115b2113a2fcaec48ff69d5be5bceff17b19195b09fc695118"} Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.035701 4775 generic.go:334] "Generic (PLEG): container finished" podID="053e93b4-4f28-478d-9065-20980afe9e20" containerID="ab22bfbf9613e4952570f4b58b9dfa2a5876ac2a81bea5c917b73f18bda88cfe" exitCode=0 Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.035783 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"053e93b4-4f28-478d-9065-20980afe9e20","Type":"ContainerDied","Data":"ab22bfbf9613e4952570f4b58b9dfa2a5876ac2a81bea5c917b73f18bda88cfe"} Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.035851 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"053e93b4-4f28-478d-9065-20980afe9e20","Type":"ContainerDied","Data":"c6208b8557503ef028aa8573339ec1a013f7ed363a4379dec4e0efaa541f0f37"} Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.035877 4775 scope.go:117] "RemoveContainer" containerID="ab22bfbf9613e4952570f4b58b9dfa2a5876ac2a81bea5c917b73f18bda88cfe" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.036234 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.097523 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.098045 4775 scope.go:117] "RemoveContainer" containerID="ab22bfbf9613e4952570f4b58b9dfa2a5876ac2a81bea5c917b73f18bda88cfe" Jan 23 14:35:41 crc kubenswrapper[4775]: E0123 14:35:41.098488 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab22bfbf9613e4952570f4b58b9dfa2a5876ac2a81bea5c917b73f18bda88cfe\": container with ID starting with ab22bfbf9613e4952570f4b58b9dfa2a5876ac2a81bea5c917b73f18bda88cfe not found: ID does not exist" containerID="ab22bfbf9613e4952570f4b58b9dfa2a5876ac2a81bea5c917b73f18bda88cfe" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.098535 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab22bfbf9613e4952570f4b58b9dfa2a5876ac2a81bea5c917b73f18bda88cfe"} err="failed to get container status \"ab22bfbf9613e4952570f4b58b9dfa2a5876ac2a81bea5c917b73f18bda88cfe\": rpc error: code = NotFound desc = could not find container \"ab22bfbf9613e4952570f4b58b9dfa2a5876ac2a81bea5c917b73f18bda88cfe\": container with ID starting with ab22bfbf9613e4952570f4b58b9dfa2a5876ac2a81bea5c917b73f18bda88cfe not found: ID does not exist" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.102746 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.458939 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapia3ac-account-delete-gzdbr" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.464622 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell11814-account-delete-f4rp4" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.648500 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw59g\" (UniqueName: \"kubernetes.io/projected/216ac3cd-4e4b-40b9-b05d-be15cfe121ed-kube-api-access-kw59g\") pod \"216ac3cd-4e4b-40b9-b05d-be15cfe121ed\" (UID: \"216ac3cd-4e4b-40b9-b05d-be15cfe121ed\") " Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.648638 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2l5p5\" (UniqueName: \"kubernetes.io/projected/7d8e2db8-c4c4-48ea-83a1-d750eb6de857-kube-api-access-2l5p5\") pod \"7d8e2db8-c4c4-48ea-83a1-d750eb6de857\" (UID: \"7d8e2db8-c4c4-48ea-83a1-d750eb6de857\") " Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.648720 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/216ac3cd-4e4b-40b9-b05d-be15cfe121ed-operator-scripts\") pod \"216ac3cd-4e4b-40b9-b05d-be15cfe121ed\" (UID: \"216ac3cd-4e4b-40b9-b05d-be15cfe121ed\") " Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.648990 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d8e2db8-c4c4-48ea-83a1-d750eb6de857-operator-scripts\") pod \"7d8e2db8-c4c4-48ea-83a1-d750eb6de857\" (UID: \"7d8e2db8-c4c4-48ea-83a1-d750eb6de857\") " Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.649794 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/216ac3cd-4e4b-40b9-b05d-be15cfe121ed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "216ac3cd-4e4b-40b9-b05d-be15cfe121ed" (UID: "216ac3cd-4e4b-40b9-b05d-be15cfe121ed"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.650960 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d8e2db8-c4c4-48ea-83a1-d750eb6de857-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7d8e2db8-c4c4-48ea-83a1-d750eb6de857" (UID: "7d8e2db8-c4c4-48ea-83a1-d750eb6de857"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.653992 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d8e2db8-c4c4-48ea-83a1-d750eb6de857-kube-api-access-2l5p5" (OuterVolumeSpecName: "kube-api-access-2l5p5") pod "7d8e2db8-c4c4-48ea-83a1-d750eb6de857" (UID: "7d8e2db8-c4c4-48ea-83a1-d750eb6de857"). InnerVolumeSpecName "kube-api-access-2l5p5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.654727 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/216ac3cd-4e4b-40b9-b05d-be15cfe121ed-kube-api-access-kw59g" (OuterVolumeSpecName: "kube-api-access-kw59g") pod "216ac3cd-4e4b-40b9-b05d-be15cfe121ed" (UID: "216ac3cd-4e4b-40b9-b05d-be15cfe121ed"). InnerVolumeSpecName "kube-api-access-kw59g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.725669 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="053e93b4-4f28-478d-9065-20980afe9e20" path="/var/lib/kubelet/pods/053e93b4-4f28-478d-9065-20980afe9e20/volumes" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.726470 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51e63565-a2ef-4d12-af2f-f3dc6c2942d9" path="/var/lib/kubelet/pods/51e63565-a2ef-4d12-af2f-f3dc6c2942d9/volumes" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.727130 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dc76b90-669a-4df4-a976-1199443a8f55" path="/var/lib/kubelet/pods/8dc76b90-669a-4df4-a976-1199443a8f55/volumes" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.751380 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kw59g\" (UniqueName: \"kubernetes.io/projected/216ac3cd-4e4b-40b9-b05d-be15cfe121ed-kube-api-access-kw59g\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.751423 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2l5p5\" (UniqueName: \"kubernetes.io/projected/7d8e2db8-c4c4-48ea-83a1-d750eb6de857-kube-api-access-2l5p5\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.751477 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/216ac3cd-4e4b-40b9-b05d-be15cfe121ed-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.751495 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d8e2db8-c4c4-48ea-83a1-d750eb6de857-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.786029 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="93ee5e49-16f0-402a-9d8e-6f237110e663" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.202:8774/\": read tcp 10.217.0.2:37162->10.217.0.202:8774: read: connection reset by peer" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.786100 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="93ee5e49-16f0-402a-9d8e-6f237110e663" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.202:8774/\": read tcp 10.217.0.2:37146->10.217.0.202:8774: read: connection reset by peer" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.812572 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.198:8775/\": read tcp 10.217.0.2:39896->10.217.0.198:8775: read: connection reset by peer" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.812632 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.198:8775/\": read tcp 10.217.0.2:39900->10.217.0.198:8775: read: connection reset by peer" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.060204 4775 
Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.060204 4775 generic.go:334] "Generic (PLEG): container finished" podID="8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" containerID="097b2364f83440a3132b6cb79cdb472334da74927128439c671d4d99b0398fa9" exitCode=0
Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.060429 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4","Type":"ContainerDied","Data":"097b2364f83440a3132b6cb79cdb472334da74927128439c671d4d99b0398fa9"}
Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.063757 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapia3ac-account-delete-gzdbr" event={"ID":"216ac3cd-4e4b-40b9-b05d-be15cfe121ed","Type":"ContainerDied","Data":"f8ac7a707989f4b704ed34110c6cbeac748ed5bc390cc7fe8e9c1bd9c862dadb"}
Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.063792 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8ac7a707989f4b704ed34110c6cbeac748ed5bc390cc7fe8e9c1bd9c862dadb"
Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.063919 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapia3ac-account-delete-gzdbr"
Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.073446 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell11814-account-delete-f4rp4" event={"ID":"7d8e2db8-c4c4-48ea-83a1-d750eb6de857","Type":"ContainerDied","Data":"2c60df97c5de83b16f76501e503728db1f38861868acc41b3bbf53e358943ce1"}
Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.073495 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c60df97c5de83b16f76501e503728db1f38861868acc41b3bbf53e358943ce1"
Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.073577 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell11814-account-delete-f4rp4"
Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.082237 4775 generic.go:334] "Generic (PLEG): container finished" podID="93ee5e49-16f0-402a-9d8e-6f237110e663" containerID="5a0c9d73c99e74b57defba56af031189ee12f4eb97f9a8df2f62a83574ffa9a2" exitCode=0
Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.082375 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"93ee5e49-16f0-402a-9d8e-6f237110e663","Type":"ContainerDied","Data":"5a0c9d73c99e74b57defba56af031189ee12f4eb97f9a8df2f62a83574ffa9a2"}
Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.251295 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.262569 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.266538 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-logs\") pod \"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4\" (UID: \"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4\") "
Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.272239 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93ee5e49-16f0-402a-9d8e-6f237110e663-config-data\") pod \"93ee5e49-16f0-402a-9d8e-6f237110e663\" (UID: \"93ee5e49-16f0-402a-9d8e-6f237110e663\") "
Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.272277 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-config-data\") pod \"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4\" (UID: \"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4\") "
Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.272302 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js8lh\" (UniqueName: \"kubernetes.io/projected/93ee5e49-16f0-402a-9d8e-6f237110e663-kube-api-access-js8lh\") pod \"93ee5e49-16f0-402a-9d8e-6f237110e663\" (UID: \"93ee5e49-16f0-402a-9d8e-6f237110e663\") "
Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.272323 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93ee5e49-16f0-402a-9d8e-6f237110e663-logs\") pod \"93ee5e49-16f0-402a-9d8e-6f237110e663\" (UID: \"93ee5e49-16f0-402a-9d8e-6f237110e663\") "
Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.272350 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhnl9\" (UniqueName: \"kubernetes.io/projected/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-kube-api-access-lhnl9\") pod \"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4\" (UID: \"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4\") "
Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.268566 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-logs" (OuterVolumeSpecName: "logs") pod "8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" (UID: "8a2eb109-bc5d-4ce5-af46-d5596b98b4e4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.272965 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-logs\") on node \"crc\" DevicePath \"\""
Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.273227 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93ee5e49-16f0-402a-9d8e-6f237110e663-logs" (OuterVolumeSpecName: "logs") pod "93ee5e49-16f0-402a-9d8e-6f237110e663" (UID: "93ee5e49-16f0-402a-9d8e-6f237110e663"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.278065 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93ee5e49-16f0-402a-9d8e-6f237110e663-kube-api-access-js8lh" (OuterVolumeSpecName: "kube-api-access-js8lh") pod "93ee5e49-16f0-402a-9d8e-6f237110e663" (UID: "93ee5e49-16f0-402a-9d8e-6f237110e663"). InnerVolumeSpecName "kube-api-access-js8lh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.282711 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-kube-api-access-lhnl9" (OuterVolumeSpecName: "kube-api-access-lhnl9") pod "8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" (UID: "8a2eb109-bc5d-4ce5-af46-d5596b98b4e4"). InnerVolumeSpecName "kube-api-access-lhnl9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.305509 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-config-data" (OuterVolumeSpecName: "config-data") pod "8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" (UID: "8a2eb109-bc5d-4ce5-af46-d5596b98b4e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.306265 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93ee5e49-16f0-402a-9d8e-6f237110e663-config-data" (OuterVolumeSpecName: "config-data") pod "93ee5e49-16f0-402a-9d8e-6f237110e663" (UID: "93ee5e49-16f0-402a-9d8e-6f237110e663"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Need to start a new one" pod="nova-kuttl-default/novacell04dcc-account-delete-bljlz" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.375297 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93ee5e49-16f0-402a-9d8e-6f237110e663-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.375362 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.375381 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js8lh\" (UniqueName: \"kubernetes.io/projected/93ee5e49-16f0-402a-9d8e-6f237110e663-kube-api-access-js8lh\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.375394 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93ee5e49-16f0-402a-9d8e-6f237110e663-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.375408 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhnl9\" (UniqueName: \"kubernetes.io/projected/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-kube-api-access-lhnl9\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.476152 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc1717a4-664a-4a44-9206-0b5c472cbd50-operator-scripts\") pod \"bc1717a4-664a-4a44-9206-0b5c472cbd50\" (UID: \"bc1717a4-664a-4a44-9206-0b5c472cbd50\") " Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.476245 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6g9q\" (UniqueName: \"kubernetes.io/projected/bc1717a4-664a-4a44-9206-0b5c472cbd50-kube-api-access-d6g9q\") pod \"bc1717a4-664a-4a44-9206-0b5c472cbd50\" (UID: \"bc1717a4-664a-4a44-9206-0b5c472cbd50\") " Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.477088 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc1717a4-664a-4a44-9206-0b5c472cbd50-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bc1717a4-664a-4a44-9206-0b5c472cbd50" (UID: "bc1717a4-664a-4a44-9206-0b5c472cbd50"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.479718 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc1717a4-664a-4a44-9206-0b5c472cbd50-kube-api-access-d6g9q" (OuterVolumeSpecName: "kube-api-access-d6g9q") pod "bc1717a4-664a-4a44-9206-0b5c472cbd50" (UID: "bc1717a4-664a-4a44-9206-0b5c472cbd50"). InnerVolumeSpecName "kube-api-access-d6g9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.570342 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.577551 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc1717a4-664a-4a44-9206-0b5c472cbd50-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.577580 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6g9q\" (UniqueName: \"kubernetes.io/projected/bc1717a4-664a-4a44-9206-0b5c472cbd50-kube-api-access-d6g9q\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.678897 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1b9dee7-4afa-4bdc-88fc-f610d0bca84d-config-data\") pod \"f1b9dee7-4afa-4bdc-88fc-f610d0bca84d\" (UID: \"f1b9dee7-4afa-4bdc-88fc-f610d0bca84d\") " Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.679057 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhrnh\" (UniqueName: \"kubernetes.io/projected/f1b9dee7-4afa-4bdc-88fc-f610d0bca84d-kube-api-access-mhrnh\") pod \"f1b9dee7-4afa-4bdc-88fc-f610d0bca84d\" (UID: \"f1b9dee7-4afa-4bdc-88fc-f610d0bca84d\") " Jan 23 14:35:42 crc kubenswrapper[4775]: E0123 14:35:42.679539 4775 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 23 14:35:42 crc kubenswrapper[4775]: E0123 14:35:42.679624 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data podName:bde4903d-4224-4139-a444-3c5baf78ff7b nodeName:}" failed. No retries permitted until 2026-01-23 14:35:46.679602583 +0000 UTC m=+1893.674431323 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "bde4903d-4224-4139-a444-3c5baf78ff7b") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.692354 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1b9dee7-4afa-4bdc-88fc-f610d0bca84d-kube-api-access-mhrnh" (OuterVolumeSpecName: "kube-api-access-mhrnh") pod "f1b9dee7-4afa-4bdc-88fc-f610d0bca84d" (UID: "f1b9dee7-4afa-4bdc-88fc-f610d0bca84d"). InnerVolumeSpecName "kube-api-access-mhrnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.711379 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1b9dee7-4afa-4bdc-88fc-f610d0bca84d-config-data" (OuterVolumeSpecName: "config-data") pod "f1b9dee7-4afa-4bdc-88fc-f610d0bca84d" (UID: "f1b9dee7-4afa-4bdc-88fc-f610d0bca84d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.780703 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhrnh\" (UniqueName: \"kubernetes.io/projected/f1b9dee7-4afa-4bdc-88fc-f610d0bca84d-kube-api-access-mhrnh\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.780743 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1b9dee7-4afa-4bdc-88fc-f610d0bca84d-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.101618 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"93ee5e49-16f0-402a-9d8e-6f237110e663","Type":"ContainerDied","Data":"fadc935d0ca1313694e64e348196ad9cf5ba16ec1ffcb2fcdd1d5a9b83025e52"} Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.102207 4775 scope.go:117] "RemoveContainer" containerID="5a0c9d73c99e74b57defba56af031189ee12f4eb97f9a8df2f62a83574ffa9a2" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.101635 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.104422 4775 generic.go:334] "Generic (PLEG): container finished" podID="f1b9dee7-4afa-4bdc-88fc-f610d0bca84d" containerID="575370c07292e3d956d8a0e40335b6219090d6e10fbe3d288c76deb77fcfe67e" exitCode=0 Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.104506 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.104546 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"f1b9dee7-4afa-4bdc-88fc-f610d0bca84d","Type":"ContainerDied","Data":"575370c07292e3d956d8a0e40335b6219090d6e10fbe3d288c76deb77fcfe67e"} Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.104596 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"f1b9dee7-4afa-4bdc-88fc-f610d0bca84d","Type":"ContainerDied","Data":"02382e22f435f1e3a7c73d28641f54e87db1dd32276e640504ea0f19f830c722"} Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.107836 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4","Type":"ContainerDied","Data":"3ef3e20b260f3e98c87c0a0151aead3f8b34244b446f78a1aa8e60eef7375188"} Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.107977 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.112793 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell04dcc-account-delete-bljlz" event={"ID":"bc1717a4-664a-4a44-9206-0b5c472cbd50","Type":"ContainerDied","Data":"493d8cbf75b60552f0b7eabb631b7ee805b747cfaa54a588d934ac6f11e2c848"} Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.112869 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="493d8cbf75b60552f0b7eabb631b7ee805b747cfaa54a588d934ac6f11e2c848" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.112896 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell04dcc-account-delete-bljlz" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.150826 4775 scope.go:117] "RemoveContainer" containerID="aa0a614b45a14d37314ee88b48d9cdfd5a2ac59674285aa0bcd8f730765f5458" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.178719 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.194126 4775 scope.go:117] "RemoveContainer" containerID="575370c07292e3d956d8a0e40335b6219090d6e10fbe3d288c76deb77fcfe67e" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.197788 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.208201 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.219444 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.220433 4775 scope.go:117] "RemoveContainer" containerID="575370c07292e3d956d8a0e40335b6219090d6e10fbe3d288c76deb77fcfe67e" Jan 23 14:35:43 crc kubenswrapper[4775]: E0123 14:35:43.221106 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"575370c07292e3d956d8a0e40335b6219090d6e10fbe3d288c76deb77fcfe67e\": container with ID starting with 575370c07292e3d956d8a0e40335b6219090d6e10fbe3d288c76deb77fcfe67e not found: ID does not exist" containerID="575370c07292e3d956d8a0e40335b6219090d6e10fbe3d288c76deb77fcfe67e" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.221178 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"575370c07292e3d956d8a0e40335b6219090d6e10fbe3d288c76deb77fcfe67e"} err="failed to get container status \"575370c07292e3d956d8a0e40335b6219090d6e10fbe3d288c76deb77fcfe67e\": rpc error: code = NotFound desc = could not find container \"575370c07292e3d956d8a0e40335b6219090d6e10fbe3d288c76deb77fcfe67e\": container with ID starting with 575370c07292e3d956d8a0e40335b6219090d6e10fbe3d288c76deb77fcfe67e not found: ID does not exist" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.221209 4775 scope.go:117] "RemoveContainer" containerID="097b2364f83440a3132b6cb79cdb472334da74927128439c671d4d99b0398fa9" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.227783 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.234789 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.245233 4775 scope.go:117] "RemoveContainer" containerID="75614d1831bbac5592105e5265508722336cc15ee6a181f2f54c134aec1aa13b" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.417573 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-p9ljs"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.422879 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-p9ljs"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.449222 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.456473 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.462796 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell11814-account-delete-f4rp4"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.469551 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novacell11814-account-delete-f4rp4"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.531397 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-db-create-5h6rf"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.550390 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-db-create-5h6rf"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.563382 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novaapia3ac-account-delete-gzdbr"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.572823 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.578379 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novaapia3ac-account-delete-gzdbr"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.584122 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.611957 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-nr9cr"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.620307 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-nr9cr"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.637589 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell04dcc-account-delete-bljlz"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.643082 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.647231 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novacell04dcc-account-delete-bljlz"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.651535 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.728271 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="112204a1-12d6-49b5-b97e-de4daab49dcf" path="/var/lib/kubelet/pods/112204a1-12d6-49b5-b97e-de4daab49dcf/volumes" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.729007 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="216ac3cd-4e4b-40b9-b05d-be15cfe121ed" path="/var/lib/kubelet/pods/216ac3cd-4e4b-40b9-b05d-be15cfe121ed/volumes" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.729668 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c" path="/var/lib/kubelet/pods/42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c/volumes" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 
14:35:43.730339 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d8e2db8-c4c4-48ea-83a1-d750eb6de857" path="/var/lib/kubelet/pods/7d8e2db8-c4c4-48ea-83a1-d750eb6de857/volumes" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.731627 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ff6b200-7364-4e13-956d-628abd48cbaa" path="/var/lib/kubelet/pods/7ff6b200-7364-4e13-956d-628abd48cbaa/volumes" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.732381 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" path="/var/lib/kubelet/pods/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4/volumes" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.733153 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93ee5e49-16f0-402a-9d8e-6f237110e663" path="/var/lib/kubelet/pods/93ee5e49-16f0-402a-9d8e-6f237110e663/volumes" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.734508 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b95fa161-1171-4dc2-b0be-3aa279cb717d" path="/var/lib/kubelet/pods/b95fa161-1171-4dc2-b0be-3aa279cb717d/volumes" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.735478 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc1717a4-664a-4a44-9206-0b5c472cbd50" path="/var/lib/kubelet/pods/bc1717a4-664a-4a44-9206-0b5c472cbd50/volumes" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.736207 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4b0dbf6-948b-45c4-b5a0-6027f816c873" path="/var/lib/kubelet/pods/c4b0dbf6-948b-45c4-b5a0-6027f816c873/volumes" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.737400 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e86f57ad-0eba-4794-8f64-f70609e535e8" path="/var/lib/kubelet/pods/e86f57ad-0eba-4794-8f64-f70609e535e8/volumes" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.738049 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1b9dee7-4afa-4bdc-88fc-f610d0bca84d" path="/var/lib/kubelet/pods/f1b9dee7-4afa-4bdc-88fc-f610d0bca84d/volumes" Jan 23 14:35:45 crc kubenswrapper[4775]: E0123 14:35:45.888241 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 23 14:35:45 crc kubenswrapper[4775]: E0123 14:35:45.892038 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 23 14:35:45 crc kubenswrapper[4775]: E0123 14:35:45.896730 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 23 14:35:45 crc kubenswrapper[4775]: E0123 14:35:45.896858 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
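[Editor's note] The "ContainerStatus from runtime service failed ... NotFound" errors that follow each RemoveContainer above are a benign race: the container was already removed, and the kubelet just logs the failure and moves on. Telling that case apart from a real CRI failure comes down to inspecting the gRPC status code; a sketch using google.golang.org/grpc/status follows, where the handling policy is an assumption for illustration rather than the kubelet's exact behavior.

// Sketch: distinguishing an already-deleted container (codes.NotFound,
// benign, as in the log) from a real CRI failure. The logging policy
// here is illustrative, not the kubelet's actual code.
package sketch

import (
	"log"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func handleContainerStatusErr(containerID string, err error) {
	if err == nil {
		return
	}
	if s, ok := status.FromError(err); ok && s.Code() == codes.NotFound {
		// Already gone: the API DELETE raced with container GC.
		log.Printf("container %s already removed", containerID)
		return
	}
	log.Printf("ContainerStatus failed for %s: %v", containerID, err)
}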
Jan 23 14:35:45 crc kubenswrapper[4775]: E0123 14:35:45.896858 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.940933 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-db-create-bfq79"]
Jan 23 14:35:45 crc kubenswrapper[4775]: E0123 14:35:45.941264 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93ee5e49-16f0-402a-9d8e-6f237110e663" containerName="nova-kuttl-api-log"
Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941285 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="93ee5e49-16f0-402a-9d8e-6f237110e663" containerName="nova-kuttl-api-log"
Jan 23 14:35:45 crc kubenswrapper[4775]: E0123 14:35:45.941305 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216ac3cd-4e4b-40b9-b05d-be15cfe121ed" containerName="mariadb-account-delete"
Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941316 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="216ac3cd-4e4b-40b9-b05d-be15cfe121ed" containerName="mariadb-account-delete"
Jan 23 14:35:45 crc kubenswrapper[4775]: E0123 14:35:45.941333 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc1717a4-664a-4a44-9206-0b5c472cbd50" containerName="mariadb-account-delete"
Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941340 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc1717a4-664a-4a44-9206-0b5c472cbd50" containerName="mariadb-account-delete"
Jan 23 14:35:45 crc kubenswrapper[4775]: E0123 14:35:45.941358 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" containerName="nova-kuttl-metadata-log"
Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941367 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" containerName="nova-kuttl-metadata-log"
Jan 23 14:35:45 crc kubenswrapper[4775]: E0123 14:35:45.941382 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93ee5e49-16f0-402a-9d8e-6f237110e663" containerName="nova-kuttl-api-api"
Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941390 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="93ee5e49-16f0-402a-9d8e-6f237110e663" containerName="nova-kuttl-api-api"
Jan 23 14:35:45 crc kubenswrapper[4775]: E0123 14:35:45.941404 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d8e2db8-c4c4-48ea-83a1-d750eb6de857" containerName="mariadb-account-delete"
Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941412 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d8e2db8-c4c4-48ea-83a1-d750eb6de857" containerName="mariadb-account-delete"
Jan 23 14:35:45 crc kubenswrapper[4775]: E0123 14:35:45.941425 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dc76b90-669a-4df4-a976-1199443a8f55" containerName="nova-kuttl-cell1-conductor-conductor"
Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941433 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dc76b90-669a-4df4-a976-1199443a8f55" containerName="nova-kuttl-cell1-conductor-conductor"
Jan 23 14:35:45 crc kubenswrapper[4775]: E0123 14:35:45.941443 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="053e93b4-4f28-478d-9065-20980afe9e20" containerName="nova-kuttl-scheduler-scheduler"
Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941450 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="053e93b4-4f28-478d-9065-20980afe9e20" containerName="nova-kuttl-scheduler-scheduler"
Jan 23 14:35:45 crc kubenswrapper[4775]: E0123 14:35:45.941460 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1b9dee7-4afa-4bdc-88fc-f610d0bca84d" containerName="nova-kuttl-cell0-conductor-conductor"
Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941469 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1b9dee7-4afa-4bdc-88fc-f610d0bca84d" containerName="nova-kuttl-cell0-conductor-conductor"
Jan 23 14:35:45 crc kubenswrapper[4775]: E0123 14:35:45.941486 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51e63565-a2ef-4d12-af2f-f3dc6c2942d9" containerName="nova-kuttl-cell1-novncproxy-novncproxy"
Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941494 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="51e63565-a2ef-4d12-af2f-f3dc6c2942d9" containerName="nova-kuttl-cell1-novncproxy-novncproxy"
Jan 23 14:35:45 crc kubenswrapper[4775]: E0123 14:35:45.941513 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" containerName="nova-kuttl-metadata-metadata"
Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941521 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" containerName="nova-kuttl-metadata-metadata"
Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941698 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1b9dee7-4afa-4bdc-88fc-f610d0bca84d" containerName="nova-kuttl-cell0-conductor-conductor"
Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941713 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d8e2db8-c4c4-48ea-83a1-d750eb6de857" containerName="mariadb-account-delete"
Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941733 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="216ac3cd-4e4b-40b9-b05d-be15cfe121ed" containerName="mariadb-account-delete"
Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941750 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc1717a4-664a-4a44-9206-0b5c472cbd50" containerName="mariadb-account-delete"
Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941769 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="51e63565-a2ef-4d12-af2f-f3dc6c2942d9" containerName="nova-kuttl-cell1-novncproxy-novncproxy"
Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941783 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dc76b90-669a-4df4-a976-1199443a8f55" containerName="nova-kuttl-cell1-conductor-conductor"
Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941818 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="93ee5e49-16f0-402a-9d8e-6f237110e663" containerName="nova-kuttl-api-api"
Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941833 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="053e93b4-4f28-478d-9065-20980afe9e20" containerName="nova-kuttl-scheduler-scheduler"
Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941845 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" containerName="nova-kuttl-metadata-log"
Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941859 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="93ee5e49-16f0-402a-9d8e-6f237110e663" containerName="nova-kuttl-api-log"
Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941870 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" containerName="nova-kuttl-metadata-metadata"
Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.942682 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-bfq79"
Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.957596 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-bfq79"]
Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.034628 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-d8kgs"]
Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.035595 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-d8kgs"
Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.037270 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqsln\" (UniqueName: \"kubernetes.io/projected/98b564d3-5399-47b6-9397-4c3b006f9e13-kube-api-access-xqsln\") pod \"nova-api-db-create-bfq79\" (UID: \"98b564d3-5399-47b6-9397-4c3b006f9e13\") " pod="nova-kuttl-default/nova-api-db-create-bfq79"
Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.037309 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98b564d3-5399-47b6-9397-4c3b006f9e13-operator-scripts\") pod \"nova-api-db-create-bfq79\" (UID: \"98b564d3-5399-47b6-9397-4c3b006f9e13\") " pod="nova-kuttl-default/nova-api-db-create-bfq79"
Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.037364 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603674a6-1055-4e27-b370-2b57865ebc55-operator-scripts\") pod \"nova-cell0-db-create-d8kgs\" (UID: \"603674a6-1055-4e27-b370-2b57865ebc55\") " pod="nova-kuttl-default/nova-cell0-db-create-d8kgs"
Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.037404 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cstc5\" (UniqueName: \"kubernetes.io/projected/603674a6-1055-4e27-b370-2b57865ebc55-kube-api-access-cstc5\") pod \"nova-cell0-db-create-d8kgs\" (UID: \"603674a6-1055-4e27-b370-2b57865ebc55\") " pod="nova-kuttl-default/nova-cell0-db-create-d8kgs"
Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.049161 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-d8kgs"]
Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.122400 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s"]
Need to start a new one" pod="nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.126086 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-api-db-secret" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.130543 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s"] Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.139065 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cstc5\" (UniqueName: \"kubernetes.io/projected/603674a6-1055-4e27-b370-2b57865ebc55-kube-api-access-cstc5\") pod \"nova-cell0-db-create-d8kgs\" (UID: \"603674a6-1055-4e27-b370-2b57865ebc55\") " pod="nova-kuttl-default/nova-cell0-db-create-d8kgs" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.139138 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqsln\" (UniqueName: \"kubernetes.io/projected/98b564d3-5399-47b6-9397-4c3b006f9e13-kube-api-access-xqsln\") pod \"nova-api-db-create-bfq79\" (UID: \"98b564d3-5399-47b6-9397-4c3b006f9e13\") " pod="nova-kuttl-default/nova-api-db-create-bfq79" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.139166 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xmx4\" (UniqueName: \"kubernetes.io/projected/95df8848-8035-4302-9689-db060f7d4148-kube-api-access-8xmx4\") pod \"nova-api-31e4-account-create-update-2rd2s\" (UID: \"95df8848-8035-4302-9689-db060f7d4148\") " pod="nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.139190 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98b564d3-5399-47b6-9397-4c3b006f9e13-operator-scripts\") pod \"nova-api-db-create-bfq79\" (UID: \"98b564d3-5399-47b6-9397-4c3b006f9e13\") " pod="nova-kuttl-default/nova-api-db-create-bfq79" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.139231 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95df8848-8035-4302-9689-db060f7d4148-operator-scripts\") pod \"nova-api-31e4-account-create-update-2rd2s\" (UID: \"95df8848-8035-4302-9689-db060f7d4148\") " pod="nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.139250 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603674a6-1055-4e27-b370-2b57865ebc55-operator-scripts\") pod \"nova-cell0-db-create-d8kgs\" (UID: \"603674a6-1055-4e27-b370-2b57865ebc55\") " pod="nova-kuttl-default/nova-cell0-db-create-d8kgs" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.139863 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603674a6-1055-4e27-b370-2b57865ebc55-operator-scripts\") pod \"nova-cell0-db-create-d8kgs\" (UID: \"603674a6-1055-4e27-b370-2b57865ebc55\") " pod="nova-kuttl-default/nova-cell0-db-create-d8kgs" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.140412 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/98b564d3-5399-47b6-9397-4c3b006f9e13-operator-scripts\") pod \"nova-api-db-create-bfq79\" (UID: \"98b564d3-5399-47b6-9397-4c3b006f9e13\") " pod="nova-kuttl-default/nova-api-db-create-bfq79" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.158279 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqsln\" (UniqueName: \"kubernetes.io/projected/98b564d3-5399-47b6-9397-4c3b006f9e13-kube-api-access-xqsln\") pod \"nova-api-db-create-bfq79\" (UID: \"98b564d3-5399-47b6-9397-4c3b006f9e13\") " pod="nova-kuttl-default/nova-api-db-create-bfq79" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.158635 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cstc5\" (UniqueName: \"kubernetes.io/projected/603674a6-1055-4e27-b370-2b57865ebc55-kube-api-access-cstc5\") pod \"nova-cell0-db-create-d8kgs\" (UID: \"603674a6-1055-4e27-b370-2b57865ebc55\") " pod="nova-kuttl-default/nova-cell0-db-create-d8kgs" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.219516 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-82jzj"] Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.220580 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-82jzj" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.232250 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-82jzj"] Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.240564 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xmx4\" (UniqueName: \"kubernetes.io/projected/95df8848-8035-4302-9689-db060f7d4148-kube-api-access-8xmx4\") pod \"nova-api-31e4-account-create-update-2rd2s\" (UID: \"95df8848-8035-4302-9689-db060f7d4148\") " pod="nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.240736 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95df8848-8035-4302-9689-db060f7d4148-operator-scripts\") pod \"nova-api-31e4-account-create-update-2rd2s\" (UID: \"95df8848-8035-4302-9689-db060f7d4148\") " pod="nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.241709 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95df8848-8035-4302-9689-db060f7d4148-operator-scripts\") pod \"nova-api-31e4-account-create-update-2rd2s\" (UID: \"95df8848-8035-4302-9689-db060f7d4148\") " pod="nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.257562 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xmx4\" (UniqueName: \"kubernetes.io/projected/95df8848-8035-4302-9689-db060f7d4148-kube-api-access-8xmx4\") pod \"nova-api-31e4-account-create-update-2rd2s\" (UID: \"95df8848-8035-4302-9689-db060f7d4148\") " pod="nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.299039 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-bfq79" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.342585 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtpfl\" (UniqueName: \"kubernetes.io/projected/891c1a15-7b44-4c8f-be11-d06333a1d0d1-kube-api-access-vtpfl\") pod \"nova-cell1-db-create-82jzj\" (UID: \"891c1a15-7b44-4c8f-be11-d06333a1d0d1\") " pod="nova-kuttl-default/nova-cell1-db-create-82jzj" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.342639 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/891c1a15-7b44-4c8f-be11-d06333a1d0d1-operator-scripts\") pod \"nova-cell1-db-create-82jzj\" (UID: \"891c1a15-7b44-4c8f-be11-d06333a1d0d1\") " pod="nova-kuttl-default/nova-cell1-db-create-82jzj" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.343380 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h"] Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.344330 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.348999 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell0-db-secret" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.350189 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-d8kgs" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.355182 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h"] Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.439375 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.444598 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48eb2aff-1769-415f-b284-8d0cbf32a4e9-operator-scripts\") pod \"nova-cell0-f1e1-account-create-update-8ng7h\" (UID: \"48eb2aff-1769-415f-b284-8d0cbf32a4e9\") " pod="nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.444632 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm97n\" (UniqueName: \"kubernetes.io/projected/48eb2aff-1769-415f-b284-8d0cbf32a4e9-kube-api-access-pm97n\") pod \"nova-cell0-f1e1-account-create-update-8ng7h\" (UID: \"48eb2aff-1769-415f-b284-8d0cbf32a4e9\") " pod="nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.444681 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtpfl\" (UniqueName: \"kubernetes.io/projected/891c1a15-7b44-4c8f-be11-d06333a1d0d1-kube-api-access-vtpfl\") pod \"nova-cell1-db-create-82jzj\" (UID: \"891c1a15-7b44-4c8f-be11-d06333a1d0d1\") " pod="nova-kuttl-default/nova-cell1-db-create-82jzj" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.444716 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/891c1a15-7b44-4c8f-be11-d06333a1d0d1-operator-scripts\") pod \"nova-cell1-db-create-82jzj\" (UID: \"891c1a15-7b44-4c8f-be11-d06333a1d0d1\") " pod="nova-kuttl-default/nova-cell1-db-create-82jzj" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.451286 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/891c1a15-7b44-4c8f-be11-d06333a1d0d1-operator-scripts\") pod \"nova-cell1-db-create-82jzj\" (UID: \"891c1a15-7b44-4c8f-be11-d06333a1d0d1\") " pod="nova-kuttl-default/nova-cell1-db-create-82jzj" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.473915 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtpfl\" (UniqueName: \"kubernetes.io/projected/891c1a15-7b44-4c8f-be11-d06333a1d0d1-kube-api-access-vtpfl\") pod \"nova-cell1-db-create-82jzj\" (UID: \"891c1a15-7b44-4c8f-be11-d06333a1d0d1\") " pod="nova-kuttl-default/nova-cell1-db-create-82jzj" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.522064 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8"] Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.522992 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.524699 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell1-db-secret" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.536433 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8"] Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.546844 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-82jzj" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.547782 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48eb2aff-1769-415f-b284-8d0cbf32a4e9-operator-scripts\") pod \"nova-cell0-f1e1-account-create-update-8ng7h\" (UID: \"48eb2aff-1769-415f-b284-8d0cbf32a4e9\") " pod="nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.548027 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm97n\" (UniqueName: \"kubernetes.io/projected/48eb2aff-1769-415f-b284-8d0cbf32a4e9-kube-api-access-pm97n\") pod \"nova-cell0-f1e1-account-create-update-8ng7h\" (UID: \"48eb2aff-1769-415f-b284-8d0cbf32a4e9\") " pod="nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.549703 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48eb2aff-1769-415f-b284-8d0cbf32a4e9-operator-scripts\") pod \"nova-cell0-f1e1-account-create-update-8ng7h\" (UID: \"48eb2aff-1769-415f-b284-8d0cbf32a4e9\") " pod="nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.570400 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm97n\" (UniqueName: \"kubernetes.io/projected/48eb2aff-1769-415f-b284-8d0cbf32a4e9-kube-api-access-pm97n\") pod \"nova-cell0-f1e1-account-create-update-8ng7h\" (UID: \"48eb2aff-1769-415f-b284-8d0cbf32a4e9\") " pod="nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.649943 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15c2fb30-3be5-4e47-b2d3-8fbd54665494-operator-scripts\") pod \"nova-cell1-574a-account-create-update-mjhg8\" (UID: \"15c2fb30-3be5-4e47-b2d3-8fbd54665494\") " pod="nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.650078 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nx97\" (UniqueName: \"kubernetes.io/projected/15c2fb30-3be5-4e47-b2d3-8fbd54665494-kube-api-access-9nx97\") pod \"nova-cell1-574a-account-create-update-mjhg8\" (UID: \"15c2fb30-3be5-4e47-b2d3-8fbd54665494\") " pod="nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.678748 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.752814 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nx97\" (UniqueName: \"kubernetes.io/projected/15c2fb30-3be5-4e47-b2d3-8fbd54665494-kube-api-access-9nx97\") pod \"nova-cell1-574a-account-create-update-mjhg8\" (UID: \"15c2fb30-3be5-4e47-b2d3-8fbd54665494\") " pod="nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8" Jan 23 14:35:46 crc kubenswrapper[4775]: E0123 14:35:46.753637 4775 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 23 14:35:46 crc kubenswrapper[4775]: E0123 14:35:46.753697 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data podName:bde4903d-4224-4139-a444-3c5baf78ff7b nodeName:}" failed. No retries permitted until 2026-01-23 14:35:54.753679991 +0000 UTC m=+1901.748508751 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "bde4903d-4224-4139-a444-3c5baf78ff7b") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.754125 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15c2fb30-3be5-4e47-b2d3-8fbd54665494-operator-scripts\") pod \"nova-cell1-574a-account-create-update-mjhg8\" (UID: \"15c2fb30-3be5-4e47-b2d3-8fbd54665494\") " pod="nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.755488 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15c2fb30-3be5-4e47-b2d3-8fbd54665494-operator-scripts\") pod \"nova-cell1-574a-account-create-update-mjhg8\" (UID: \"15c2fb30-3be5-4e47-b2d3-8fbd54665494\") " pod="nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.776762 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nx97\" (UniqueName: \"kubernetes.io/projected/15c2fb30-3be5-4e47-b2d3-8fbd54665494-kube-api-access-9nx97\") pod \"nova-cell1-574a-account-create-update-mjhg8\" (UID: \"15c2fb30-3be5-4e47-b2d3-8fbd54665494\") " pod="nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.785532 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-bfq79"] Jan 23 14:35:46 crc kubenswrapper[4775]: W0123 14:35:46.797087 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98b564d3_5399_47b6_9397_4c3b006f9e13.slice/crio-4559880a56fe388be0ecc62012eded903f09d5c3cf72691ce0db21d15a2a9b41 WatchSource:0}: Error finding container 4559880a56fe388be0ecc62012eded903f09d5c3cf72691ce0db21d15a2a9b41: Status 404 returned error can't find the container with id 4559880a56fe388be0ecc62012eded903f09d5c3cf72691ce0db21d15a2a9b41 Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.851702 4775 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.892423 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-d8kgs"] Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.907887 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h"] Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.952355 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s"] Jan 23 14:35:46 crc kubenswrapper[4775]: W0123 14:35:46.977623 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95df8848_8035_4302_9689_db060f7d4148.slice/crio-55fc17579b5a2bd9e23664c5c048cd99af77520820d666c6264f076f2466cc2c WatchSource:0}: Error finding container 55fc17579b5a2bd9e23664c5c048cd99af77520820d666c6264f076f2466cc2c: Status 404 returned error can't find the container with id 55fc17579b5a2bd9e23664c5c048cd99af77520820d666c6264f076f2466cc2c Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.997324 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-82jzj"] Jan 23 14:35:47 crc kubenswrapper[4775]: W0123 14:35:47.028298 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod891c1a15_7b44_4c8f_be11_d06333a1d0d1.slice/crio-12402f3464157138b62ec66999d41c0d51c674b6b42d2bbc30a30fe7c4b3e861 WatchSource:0}: Error finding container 12402f3464157138b62ec66999d41c0d51c674b6b42d2bbc30a30fe7c4b3e861: Status 404 returned error can't find the container with id 12402f3464157138b62ec66999d41c0d51c674b6b42d2bbc30a30fe7c4b3e861 Jan 23 14:35:47 crc kubenswrapper[4775]: I0123 14:35:47.143345 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8"] Jan 23 14:35:47 crc kubenswrapper[4775]: I0123 14:35:47.185126 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h" event={"ID":"48eb2aff-1769-415f-b284-8d0cbf32a4e9","Type":"ContainerStarted","Data":"594fa043ec888b92b711a5fa6f9217304672bf0df3d16f28c04888ef7084f11f"} Jan 23 14:35:47 crc kubenswrapper[4775]: I0123 14:35:47.189077 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-bfq79" event={"ID":"98b564d3-5399-47b6-9397-4c3b006f9e13","Type":"ContainerStarted","Data":"fad204a9922c6b587aa30b8277005173345d455f94c99d5d275be428107c4c7c"} Jan 23 14:35:47 crc kubenswrapper[4775]: I0123 14:35:47.189100 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-bfq79" event={"ID":"98b564d3-5399-47b6-9397-4c3b006f9e13","Type":"ContainerStarted","Data":"4559880a56fe388be0ecc62012eded903f09d5c3cf72691ce0db21d15a2a9b41"} Jan 23 14:35:47 crc kubenswrapper[4775]: I0123 14:35:47.192776 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-82jzj" event={"ID":"891c1a15-7b44-4c8f-be11-d06333a1d0d1","Type":"ContainerStarted","Data":"12402f3464157138b62ec66999d41c0d51c674b6b42d2bbc30a30fe7c4b3e861"} Jan 23 14:35:47 crc kubenswrapper[4775]: I0123 14:35:47.193716 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8" event={"ID":"15c2fb30-3be5-4e47-b2d3-8fbd54665494","Type":"ContainerStarted","Data":"4647aae651352dc525c6f8ea2dcb7dad8d5914c55c29da6b223800393e5bbbb9"} Jan 23 14:35:47 crc kubenswrapper[4775]: I0123 14:35:47.194596 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s" event={"ID":"95df8848-8035-4302-9689-db060f7d4148","Type":"ContainerStarted","Data":"55fc17579b5a2bd9e23664c5c048cd99af77520820d666c6264f076f2466cc2c"} Jan 23 14:35:47 crc kubenswrapper[4775]: I0123 14:35:47.197414 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-d8kgs" event={"ID":"603674a6-1055-4e27-b370-2b57865ebc55","Type":"ContainerStarted","Data":"0eff9d8eee28ce912e21c7c4f7871ae916bc9d5ed3ea4fca779e82c2788bb4b7"} Jan 23 14:35:47 crc kubenswrapper[4775]: I0123 14:35:47.197439 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-d8kgs" event={"ID":"603674a6-1055-4e27-b370-2b57865ebc55","Type":"ContainerStarted","Data":"f71b171cbcec937d3096b9d1b22617ac009f36ef7a23e82cc7cf28528f40caf7"} Jan 23 14:35:47 crc kubenswrapper[4775]: I0123 14:35:47.209559 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-api-db-create-bfq79" podStartSLOduration=2.209544843 podStartE2EDuration="2.209544843s" podCreationTimestamp="2026-01-23 14:35:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:35:47.206241604 +0000 UTC m=+1894.201070344" watchObservedRunningTime="2026-01-23 14:35:47.209544843 +0000 UTC m=+1894.204373583" Jan 23 14:35:47 crc kubenswrapper[4775]: I0123 14:35:47.221957 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-cell0-db-create-d8kgs" podStartSLOduration=1.221939947 podStartE2EDuration="1.221939947s" podCreationTimestamp="2026-01-23 14:35:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:35:47.218880695 +0000 UTC m=+1894.213709425" watchObservedRunningTime="2026-01-23 14:35:47.221939947 +0000 UTC m=+1894.216768687" Jan 23 14:35:48 crc kubenswrapper[4775]: I0123 14:35:48.210235 4775 generic.go:334] "Generic (PLEG): container finished" podID="891c1a15-7b44-4c8f-be11-d06333a1d0d1" containerID="3b2dfb102f46ee1631a2160c9d3d2f454d0244cb082c8318b072e1947bb67ce1" exitCode=0 Jan 23 14:35:48 crc kubenswrapper[4775]: I0123 14:35:48.210335 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-82jzj" event={"ID":"891c1a15-7b44-4c8f-be11-d06333a1d0d1","Type":"ContainerDied","Data":"3b2dfb102f46ee1631a2160c9d3d2f454d0244cb082c8318b072e1947bb67ce1"} Jan 23 14:35:48 crc kubenswrapper[4775]: I0123 14:35:48.220121 4775 generic.go:334] "Generic (PLEG): container finished" podID="15c2fb30-3be5-4e47-b2d3-8fbd54665494" containerID="f75e094c5540e8cb925dd39cbb448ad5adf94fb3b2f88a9a2855acad38942424" exitCode=0 Jan 23 14:35:48 crc kubenswrapper[4775]: I0123 14:35:48.220241 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8" event={"ID":"15c2fb30-3be5-4e47-b2d3-8fbd54665494","Type":"ContainerDied","Data":"f75e094c5540e8cb925dd39cbb448ad5adf94fb3b2f88a9a2855acad38942424"} Jan 23 14:35:48 crc 
kubenswrapper[4775]: I0123 14:35:48.225082 4775 generic.go:334] "Generic (PLEG): container finished" podID="95df8848-8035-4302-9689-db060f7d4148" containerID="5022709a82d85e5efe22de467daeee972c2edbb45f0956772656b5f2da7c871d" exitCode=0 Jan 23 14:35:48 crc kubenswrapper[4775]: I0123 14:35:48.225226 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s" event={"ID":"95df8848-8035-4302-9689-db060f7d4148","Type":"ContainerDied","Data":"5022709a82d85e5efe22de467daeee972c2edbb45f0956772656b5f2da7c871d"} Jan 23 14:35:48 crc kubenswrapper[4775]: I0123 14:35:48.238594 4775 generic.go:334] "Generic (PLEG): container finished" podID="603674a6-1055-4e27-b370-2b57865ebc55" containerID="0eff9d8eee28ce912e21c7c4f7871ae916bc9d5ed3ea4fca779e82c2788bb4b7" exitCode=0 Jan 23 14:35:48 crc kubenswrapper[4775]: I0123 14:35:48.238732 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-d8kgs" event={"ID":"603674a6-1055-4e27-b370-2b57865ebc55","Type":"ContainerDied","Data":"0eff9d8eee28ce912e21c7c4f7871ae916bc9d5ed3ea4fca779e82c2788bb4b7"} Jan 23 14:35:48 crc kubenswrapper[4775]: I0123 14:35:48.241627 4775 generic.go:334] "Generic (PLEG): container finished" podID="48eb2aff-1769-415f-b284-8d0cbf32a4e9" containerID="9181f36c62e9c5f12ea45cd0ada22e77d0a8f8e6dddcf6191c606aedb0bccd71" exitCode=0 Jan 23 14:35:48 crc kubenswrapper[4775]: I0123 14:35:48.241734 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h" event={"ID":"48eb2aff-1769-415f-b284-8d0cbf32a4e9","Type":"ContainerDied","Data":"9181f36c62e9c5f12ea45cd0ada22e77d0a8f8e6dddcf6191c606aedb0bccd71"} Jan 23 14:35:48 crc kubenswrapper[4775]: I0123 14:35:48.249478 4775 generic.go:334] "Generic (PLEG): container finished" podID="98b564d3-5399-47b6-9397-4c3b006f9e13" containerID="fad204a9922c6b587aa30b8277005173345d455f94c99d5d275be428107c4c7c" exitCode=0 Jan 23 14:35:48 crc kubenswrapper[4775]: I0123 14:35:48.249559 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-bfq79" event={"ID":"98b564d3-5399-47b6-9397-4c3b006f9e13","Type":"ContainerDied","Data":"fad204a9922c6b587aa30b8277005173345d455f94c99d5d275be428107c4c7c"} Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.733302 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.817647 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-d8kgs" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.822216 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-82jzj" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.835063 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.837750 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.844229 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-bfq79" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.856479 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm97n\" (UniqueName: \"kubernetes.io/projected/48eb2aff-1769-415f-b284-8d0cbf32a4e9-kube-api-access-pm97n\") pod \"48eb2aff-1769-415f-b284-8d0cbf32a4e9\" (UID: \"48eb2aff-1769-415f-b284-8d0cbf32a4e9\") " Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.856525 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48eb2aff-1769-415f-b284-8d0cbf32a4e9-operator-scripts\") pod \"48eb2aff-1769-415f-b284-8d0cbf32a4e9\" (UID: \"48eb2aff-1769-415f-b284-8d0cbf32a4e9\") " Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.858088 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48eb2aff-1769-415f-b284-8d0cbf32a4e9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "48eb2aff-1769-415f-b284-8d0cbf32a4e9" (UID: "48eb2aff-1769-415f-b284-8d0cbf32a4e9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.859070 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48eb2aff-1769-415f-b284-8d0cbf32a4e9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.863314 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48eb2aff-1769-415f-b284-8d0cbf32a4e9-kube-api-access-pm97n" (OuterVolumeSpecName: "kube-api-access-pm97n") pod "48eb2aff-1769-415f-b284-8d0cbf32a4e9" (UID: "48eb2aff-1769-415f-b284-8d0cbf32a4e9"). InnerVolumeSpecName "kube-api-access-pm97n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.960467 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603674a6-1055-4e27-b370-2b57865ebc55-operator-scripts\") pod \"603674a6-1055-4e27-b370-2b57865ebc55\" (UID: \"603674a6-1055-4e27-b370-2b57865ebc55\") " Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.960536 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15c2fb30-3be5-4e47-b2d3-8fbd54665494-operator-scripts\") pod \"15c2fb30-3be5-4e47-b2d3-8fbd54665494\" (UID: \"15c2fb30-3be5-4e47-b2d3-8fbd54665494\") " Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.960570 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqsln\" (UniqueName: \"kubernetes.io/projected/98b564d3-5399-47b6-9397-4c3b006f9e13-kube-api-access-xqsln\") pod \"98b564d3-5399-47b6-9397-4c3b006f9e13\" (UID: \"98b564d3-5399-47b6-9397-4c3b006f9e13\") " Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.960610 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95df8848-8035-4302-9689-db060f7d4148-operator-scripts\") pod \"95df8848-8035-4302-9689-db060f7d4148\" (UID: \"95df8848-8035-4302-9689-db060f7d4148\") " Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.960630 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/891c1a15-7b44-4c8f-be11-d06333a1d0d1-operator-scripts\") pod \"891c1a15-7b44-4c8f-be11-d06333a1d0d1\" (UID: \"891c1a15-7b44-4c8f-be11-d06333a1d0d1\") " Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.960659 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98b564d3-5399-47b6-9397-4c3b006f9e13-operator-scripts\") pod \"98b564d3-5399-47b6-9397-4c3b006f9e13\" (UID: \"98b564d3-5399-47b6-9397-4c3b006f9e13\") " Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.960678 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xmx4\" (UniqueName: \"kubernetes.io/projected/95df8848-8035-4302-9689-db060f7d4148-kube-api-access-8xmx4\") pod \"95df8848-8035-4302-9689-db060f7d4148\" (UID: \"95df8848-8035-4302-9689-db060f7d4148\") " Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.960744 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cstc5\" (UniqueName: \"kubernetes.io/projected/603674a6-1055-4e27-b370-2b57865ebc55-kube-api-access-cstc5\") pod \"603674a6-1055-4e27-b370-2b57865ebc55\" (UID: \"603674a6-1055-4e27-b370-2b57865ebc55\") " Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.960771 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nx97\" (UniqueName: \"kubernetes.io/projected/15c2fb30-3be5-4e47-b2d3-8fbd54665494-kube-api-access-9nx97\") pod \"15c2fb30-3be5-4e47-b2d3-8fbd54665494\" (UID: \"15c2fb30-3be5-4e47-b2d3-8fbd54665494\") " Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.960829 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtpfl\" (UniqueName: 
\"kubernetes.io/projected/891c1a15-7b44-4c8f-be11-d06333a1d0d1-kube-api-access-vtpfl\") pod \"891c1a15-7b44-4c8f-be11-d06333a1d0d1\" (UID: \"891c1a15-7b44-4c8f-be11-d06333a1d0d1\") " Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.961059 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm97n\" (UniqueName: \"kubernetes.io/projected/48eb2aff-1769-415f-b284-8d0cbf32a4e9-kube-api-access-pm97n\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.961635 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/891c1a15-7b44-4c8f-be11-d06333a1d0d1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "891c1a15-7b44-4c8f-be11-d06333a1d0d1" (UID: "891c1a15-7b44-4c8f-be11-d06333a1d0d1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.961775 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15c2fb30-3be5-4e47-b2d3-8fbd54665494-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "15c2fb30-3be5-4e47-b2d3-8fbd54665494" (UID: "15c2fb30-3be5-4e47-b2d3-8fbd54665494"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.962034 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98b564d3-5399-47b6-9397-4c3b006f9e13-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "98b564d3-5399-47b6-9397-4c3b006f9e13" (UID: "98b564d3-5399-47b6-9397-4c3b006f9e13"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.962184 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95df8848-8035-4302-9689-db060f7d4148-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "95df8848-8035-4302-9689-db060f7d4148" (UID: "95df8848-8035-4302-9689-db060f7d4148"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.962215 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/603674a6-1055-4e27-b370-2b57865ebc55-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "603674a6-1055-4e27-b370-2b57865ebc55" (UID: "603674a6-1055-4e27-b370-2b57865ebc55"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.964598 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98b564d3-5399-47b6-9397-4c3b006f9e13-kube-api-access-xqsln" (OuterVolumeSpecName: "kube-api-access-xqsln") pod "98b564d3-5399-47b6-9397-4c3b006f9e13" (UID: "98b564d3-5399-47b6-9397-4c3b006f9e13"). InnerVolumeSpecName "kube-api-access-xqsln". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.964949 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/891c1a15-7b44-4c8f-be11-d06333a1d0d1-kube-api-access-vtpfl" (OuterVolumeSpecName: "kube-api-access-vtpfl") pod "891c1a15-7b44-4c8f-be11-d06333a1d0d1" (UID: "891c1a15-7b44-4c8f-be11-d06333a1d0d1"). 
InnerVolumeSpecName "kube-api-access-vtpfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.965698 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95df8848-8035-4302-9689-db060f7d4148-kube-api-access-8xmx4" (OuterVolumeSpecName: "kube-api-access-8xmx4") pod "95df8848-8035-4302-9689-db060f7d4148" (UID: "95df8848-8035-4302-9689-db060f7d4148"). InnerVolumeSpecName "kube-api-access-8xmx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.965996 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/603674a6-1055-4e27-b370-2b57865ebc55-kube-api-access-cstc5" (OuterVolumeSpecName: "kube-api-access-cstc5") pod "603674a6-1055-4e27-b370-2b57865ebc55" (UID: "603674a6-1055-4e27-b370-2b57865ebc55"). InnerVolumeSpecName "kube-api-access-cstc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.966493 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15c2fb30-3be5-4e47-b2d3-8fbd54665494-kube-api-access-9nx97" (OuterVolumeSpecName: "kube-api-access-9nx97") pod "15c2fb30-3be5-4e47-b2d3-8fbd54665494" (UID: "15c2fb30-3be5-4e47-b2d3-8fbd54665494"). InnerVolumeSpecName "kube-api-access-9nx97". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.063216 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15c2fb30-3be5-4e47-b2d3-8fbd54665494-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.063266 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqsln\" (UniqueName: \"kubernetes.io/projected/98b564d3-5399-47b6-9397-4c3b006f9e13-kube-api-access-xqsln\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.063287 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95df8848-8035-4302-9689-db060f7d4148-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.063305 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/891c1a15-7b44-4c8f-be11-d06333a1d0d1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.063322 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98b564d3-5399-47b6-9397-4c3b006f9e13-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.063339 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xmx4\" (UniqueName: \"kubernetes.io/projected/95df8848-8035-4302-9689-db060f7d4148-kube-api-access-8xmx4\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.063356 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cstc5\" (UniqueName: \"kubernetes.io/projected/603674a6-1055-4e27-b370-2b57865ebc55-kube-api-access-cstc5\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.063373 4775 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-9nx97\" (UniqueName: \"kubernetes.io/projected/15c2fb30-3be5-4e47-b2d3-8fbd54665494-kube-api-access-9nx97\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.063390 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtpfl\" (UniqueName: \"kubernetes.io/projected/891c1a15-7b44-4c8f-be11-d06333a1d0d1-kube-api-access-vtpfl\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.063408 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603674a6-1055-4e27-b370-2b57865ebc55-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.274283 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h" event={"ID":"48eb2aff-1769-415f-b284-8d0cbf32a4e9","Type":"ContainerDied","Data":"594fa043ec888b92b711a5fa6f9217304672bf0df3d16f28c04888ef7084f11f"} Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.274703 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="594fa043ec888b92b711a5fa6f9217304672bf0df3d16f28c04888ef7084f11f" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.275013 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.280353 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-82jzj" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.280342 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-82jzj" event={"ID":"891c1a15-7b44-4c8f-be11-d06333a1d0d1","Type":"ContainerDied","Data":"12402f3464157138b62ec66999d41c0d51c674b6b42d2bbc30a30fe7c4b3e861"} Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.280530 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12402f3464157138b62ec66999d41c0d51c674b6b42d2bbc30a30fe7c4b3e861" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.283106 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-bfq79" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.283577 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-bfq79" event={"ID":"98b564d3-5399-47b6-9397-4c3b006f9e13","Type":"ContainerDied","Data":"4559880a56fe388be0ecc62012eded903f09d5c3cf72691ce0db21d15a2a9b41"} Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.283648 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4559880a56fe388be0ecc62012eded903f09d5c3cf72691ce0db21d15a2a9b41" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.287687 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8" event={"ID":"15c2fb30-3be5-4e47-b2d3-8fbd54665494","Type":"ContainerDied","Data":"4647aae651352dc525c6f8ea2dcb7dad8d5914c55c29da6b223800393e5bbbb9"} Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.287771 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4647aae651352dc525c6f8ea2dcb7dad8d5914c55c29da6b223800393e5bbbb9" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.285778 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.288661 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s" event={"ID":"95df8848-8035-4302-9689-db060f7d4148","Type":"ContainerDied","Data":"55fc17579b5a2bd9e23664c5c048cd99af77520820d666c6264f076f2466cc2c"} Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.288767 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55fc17579b5a2bd9e23664c5c048cd99af77520820d666c6264f076f2466cc2c" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.289163 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.291181 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-d8kgs" event={"ID":"603674a6-1055-4e27-b370-2b57865ebc55","Type":"ContainerDied","Data":"f71b171cbcec937d3096b9d1b22617ac009f36ef7a23e82cc7cf28528f40caf7"} Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.291230 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f71b171cbcec937d3096b9d1b22617ac009f36ef7a23e82cc7cf28528f40caf7" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.291257 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-d8kgs" Jan 23 14:35:50 crc kubenswrapper[4775]: E0123 14:35:50.888320 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 23 14:35:50 crc kubenswrapper[4775]: E0123 14:35:50.890790 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 23 14:35:50 crc kubenswrapper[4775]: E0123 14:35:50.897152 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 23 14:35:50 crc kubenswrapper[4775]: E0123 14:35:50.897261 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.592096 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8"] Jan 23 14:35:51 crc kubenswrapper[4775]: E0123 14:35:51.592456 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="603674a6-1055-4e27-b370-2b57865ebc55" containerName="mariadb-database-create" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.592482 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="603674a6-1055-4e27-b370-2b57865ebc55" containerName="mariadb-database-create" Jan 23 14:35:51 crc kubenswrapper[4775]: E0123 14:35:51.592501 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95df8848-8035-4302-9689-db060f7d4148" containerName="mariadb-account-create-update" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.592510 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="95df8848-8035-4302-9689-db060f7d4148" containerName="mariadb-account-create-update" Jan 23 14:35:51 crc kubenswrapper[4775]: E0123 14:35:51.592540 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="891c1a15-7b44-4c8f-be11-d06333a1d0d1" containerName="mariadb-database-create" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.592551 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="891c1a15-7b44-4c8f-be11-d06333a1d0d1" containerName="mariadb-database-create" Jan 23 14:35:51 crc kubenswrapper[4775]: E0123 14:35:51.592569 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48eb2aff-1769-415f-b284-8d0cbf32a4e9" containerName="mariadb-account-create-update" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.592579 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="48eb2aff-1769-415f-b284-8d0cbf32a4e9" containerName="mariadb-account-create-update" Jan 23 
14:35:51 crc kubenswrapper[4775]: E0123 14:35:51.592598 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98b564d3-5399-47b6-9397-4c3b006f9e13" containerName="mariadb-database-create" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.592607 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="98b564d3-5399-47b6-9397-4c3b006f9e13" containerName="mariadb-database-create" Jan 23 14:35:51 crc kubenswrapper[4775]: E0123 14:35:51.592627 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15c2fb30-3be5-4e47-b2d3-8fbd54665494" containerName="mariadb-account-create-update" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.592638 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="15c2fb30-3be5-4e47-b2d3-8fbd54665494" containerName="mariadb-account-create-update" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.592947 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="98b564d3-5399-47b6-9397-4c3b006f9e13" containerName="mariadb-database-create" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.592968 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="48eb2aff-1769-415f-b284-8d0cbf32a4e9" containerName="mariadb-account-create-update" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.592987 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="95df8848-8035-4302-9689-db060f7d4148" containerName="mariadb-account-create-update" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.593004 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="15c2fb30-3be5-4e47-b2d3-8fbd54665494" containerName="mariadb-account-create-update" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.593019 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="603674a6-1055-4e27-b370-2b57865ebc55" containerName="mariadb-database-create" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.593038 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="891c1a15-7b44-4c8f-be11-d06333a1d0d1" containerName="mariadb-database-create" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.593644 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.597085 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-scripts" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.597172 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-8xglt" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.597475 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.631220 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8"] Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.693663 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2swfp\" (UniqueName: \"kubernetes.io/projected/12f70e17-ec31-43fc-ac56-d1742f962de5-kube-api-access-2swfp\") pod \"nova-kuttl-cell0-conductor-db-sync-2l6n8\" (UID: \"12f70e17-ec31-43fc-ac56-d1742f962de5\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.693716 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12f70e17-ec31-43fc-ac56-d1742f962de5-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-2l6n8\" (UID: \"12f70e17-ec31-43fc-ac56-d1742f962de5\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.693839 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12f70e17-ec31-43fc-ac56-d1742f962de5-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-2l6n8\" (UID: \"12f70e17-ec31-43fc-ac56-d1742f962de5\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.795181 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12f70e17-ec31-43fc-ac56-d1742f962de5-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-2l6n8\" (UID: \"12f70e17-ec31-43fc-ac56-d1742f962de5\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.795289 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2swfp\" (UniqueName: \"kubernetes.io/projected/12f70e17-ec31-43fc-ac56-d1742f962de5-kube-api-access-2swfp\") pod \"nova-kuttl-cell0-conductor-db-sync-2l6n8\" (UID: \"12f70e17-ec31-43fc-ac56-d1742f962de5\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.795374 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12f70e17-ec31-43fc-ac56-d1742f962de5-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-2l6n8\" (UID: \"12f70e17-ec31-43fc-ac56-d1742f962de5\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.800453 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/12f70e17-ec31-43fc-ac56-d1742f962de5-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-2l6n8\" (UID: \"12f70e17-ec31-43fc-ac56-d1742f962de5\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.801730 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12f70e17-ec31-43fc-ac56-d1742f962de5-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-2l6n8\" (UID: \"12f70e17-ec31-43fc-ac56-d1742f962de5\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.816933 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2swfp\" (UniqueName: \"kubernetes.io/projected/12f70e17-ec31-43fc-ac56-d1742f962de5-kube-api-access-2swfp\") pod \"nova-kuttl-cell0-conductor-db-sync-2l6n8\" (UID: \"12f70e17-ec31-43fc-ac56-d1742f962de5\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.914361 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8" Jan 23 14:35:52 crc kubenswrapper[4775]: I0123 14:35:52.413929 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8"] Jan 23 14:35:53 crc kubenswrapper[4775]: I0123 14:35:53.325852 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8" event={"ID":"12f70e17-ec31-43fc-ac56-d1742f962de5","Type":"ContainerStarted","Data":"60accca565e62d33f56b52cced99fb327dbdd19ac23aa7c351971c0a1d7d06f7"} Jan 23 14:35:53 crc kubenswrapper[4775]: I0123 14:35:53.326394 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8" event={"ID":"12f70e17-ec31-43fc-ac56-d1742f962de5","Type":"ContainerStarted","Data":"ec0a8891924adf3fbb6081c0ef9843f0a923757c4c38b106643231b37bfab045"} Jan 23 14:35:53 crc kubenswrapper[4775]: I0123 14:35:53.352543 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8" podStartSLOduration=2.352528503 podStartE2EDuration="2.352528503s" podCreationTimestamp="2026-01-23 14:35:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:35:53.348455424 +0000 UTC m=+1900.343284194" watchObservedRunningTime="2026-01-23 14:35:53.352528503 +0000 UTC m=+1900.347357233" Jan 23 14:35:54 crc kubenswrapper[4775]: E0123 14:35:54.775395 4775 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 23 14:35:54 crc kubenswrapper[4775]: E0123 14:35:54.775920 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data podName:bde4903d-4224-4139-a444-3c5baf78ff7b nodeName:}" failed. No retries permitted until 2026-01-23 14:36:10.775884953 +0000 UTC m=+1917.770713723 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "bde4903d-4224-4139-a444-3c5baf78ff7b") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found
Jan 23 14:35:55 crc kubenswrapper[4775]: E0123 14:35:55.887929 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 23 14:35:55 crc kubenswrapper[4775]: E0123 14:35:55.891402 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 23 14:35:55 crc kubenswrapper[4775]: E0123 14:35:55.893498 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 23 14:35:55 crc kubenswrapper[4775]: E0123 14:35:55.893572 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 23 14:35:57 crc kubenswrapper[4775]: I0123 14:35:57.372969 4775 generic.go:334] "Generic (PLEG): container finished" podID="12f70e17-ec31-43fc-ac56-d1742f962de5" containerID="60accca565e62d33f56b52cced99fb327dbdd19ac23aa7c351971c0a1d7d06f7" exitCode=0
Jan 23 14:35:57 crc kubenswrapper[4775]: I0123 14:35:57.373060 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8" event={"ID":"12f70e17-ec31-43fc-ac56-d1742f962de5","Type":"ContainerDied","Data":"60accca565e62d33f56b52cced99fb327dbdd19ac23aa7c351971c0a1d7d06f7"}
Jan 23 14:35:58 crc kubenswrapper[4775]: I0123 14:35:58.837787 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8"
Jan 23 14:35:58 crc kubenswrapper[4775]: I0123 14:35:58.848115 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12f70e17-ec31-43fc-ac56-d1742f962de5-scripts\") pod \"12f70e17-ec31-43fc-ac56-d1742f962de5\" (UID: \"12f70e17-ec31-43fc-ac56-d1742f962de5\") "
Jan 23 14:35:58 crc kubenswrapper[4775]: I0123 14:35:58.848209 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12f70e17-ec31-43fc-ac56-d1742f962de5-config-data\") pod \"12f70e17-ec31-43fc-ac56-d1742f962de5\" (UID: \"12f70e17-ec31-43fc-ac56-d1742f962de5\") "
Jan 23 14:35:58 crc kubenswrapper[4775]: I0123 14:35:58.848270 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2swfp\" (UniqueName: \"kubernetes.io/projected/12f70e17-ec31-43fc-ac56-d1742f962de5-kube-api-access-2swfp\") pod \"12f70e17-ec31-43fc-ac56-d1742f962de5\" (UID: \"12f70e17-ec31-43fc-ac56-d1742f962de5\") "
Jan 23 14:35:58 crc kubenswrapper[4775]: I0123 14:35:58.857841 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12f70e17-ec31-43fc-ac56-d1742f962de5-kube-api-access-2swfp" (OuterVolumeSpecName: "kube-api-access-2swfp") pod "12f70e17-ec31-43fc-ac56-d1742f962de5" (UID: "12f70e17-ec31-43fc-ac56-d1742f962de5"). InnerVolumeSpecName "kube-api-access-2swfp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:35:58 crc kubenswrapper[4775]: I0123 14:35:58.858056 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12f70e17-ec31-43fc-ac56-d1742f962de5-scripts" (OuterVolumeSpecName: "scripts") pod "12f70e17-ec31-43fc-ac56-d1742f962de5" (UID: "12f70e17-ec31-43fc-ac56-d1742f962de5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:35:58 crc kubenswrapper[4775]: I0123 14:35:58.894411 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12f70e17-ec31-43fc-ac56-d1742f962de5-config-data" (OuterVolumeSpecName: "config-data") pod "12f70e17-ec31-43fc-ac56-d1742f962de5" (UID: "12f70e17-ec31-43fc-ac56-d1742f962de5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:35:58 crc kubenswrapper[4775]: I0123 14:35:58.954186 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12f70e17-ec31-43fc-ac56-d1742f962de5-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 14:35:58 crc kubenswrapper[4775]: I0123 14:35:58.954215 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12f70e17-ec31-43fc-ac56-d1742f962de5-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:35:58 crc kubenswrapper[4775]: I0123 14:35:58.954225 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2swfp\" (UniqueName: \"kubernetes.io/projected/12f70e17-ec31-43fc-ac56-d1742f962de5-kube-api-access-2swfp\") on node \"crc\" DevicePath \"\""
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.398414 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8" event={"ID":"12f70e17-ec31-43fc-ac56-d1742f962de5","Type":"ContainerDied","Data":"ec0a8891924adf3fbb6081c0ef9843f0a923757c4c38b106643231b37bfab045"}
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.398485 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec0a8891924adf3fbb6081c0ef9843f0a923757c4c38b106643231b37bfab045"
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.398568 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8"
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.483965 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"]
Jan 23 14:35:59 crc kubenswrapper[4775]: E0123 14:35:59.484481 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12f70e17-ec31-43fc-ac56-d1742f962de5" containerName="nova-kuttl-cell0-conductor-db-sync"
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.484513 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="12f70e17-ec31-43fc-ac56-d1742f962de5" containerName="nova-kuttl-cell0-conductor-db-sync"
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.484789 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="12f70e17-ec31-43fc-ac56-d1742f962de5" containerName="nova-kuttl-cell0-conductor-db-sync"
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.485467 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.492182 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data"
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.503503 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-8xglt"
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.510884 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"]
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.568687 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fab3b1c6-093c-4891-957c-fad86eb8fd31-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"fab3b1c6-093c-4891-957c-fad86eb8fd31\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.568791 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjzxs\" (UniqueName: \"kubernetes.io/projected/fab3b1c6-093c-4891-957c-fad86eb8fd31-kube-api-access-zjzxs\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"fab3b1c6-093c-4891-957c-fad86eb8fd31\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.669961 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjzxs\" (UniqueName: \"kubernetes.io/projected/fab3b1c6-093c-4891-957c-fad86eb8fd31-kube-api-access-zjzxs\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"fab3b1c6-093c-4891-957c-fad86eb8fd31\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.670476 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fab3b1c6-093c-4891-957c-fad86eb8fd31-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"fab3b1c6-093c-4891-957c-fad86eb8fd31\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.674230 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fab3b1c6-093c-4891-957c-fad86eb8fd31-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"fab3b1c6-093c-4891-957c-fad86eb8fd31\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.689951 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjzxs\" (UniqueName: \"kubernetes.io/projected/fab3b1c6-093c-4891-957c-fad86eb8fd31-kube-api-access-zjzxs\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"fab3b1c6-093c-4891-957c-fad86eb8fd31\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.810185 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 23 14:36:00 crc kubenswrapper[4775]: I0123 14:36:00.305510 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"]
Jan 23 14:36:00 crc kubenswrapper[4775]: W0123 14:36:00.313048 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfab3b1c6_093c_4891_957c_fad86eb8fd31.slice/crio-a4af92dc82020f848fc10739302c2652938db547b6bcb453819e4595c3f34e60 WatchSource:0}: Error finding container a4af92dc82020f848fc10739302c2652938db547b6bcb453819e4595c3f34e60: Status 404 returned error can't find the container with id a4af92dc82020f848fc10739302c2652938db547b6bcb453819e4595c3f34e60
Jan 23 14:36:00 crc kubenswrapper[4775]: I0123 14:36:00.414279 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"fab3b1c6-093c-4891-957c-fad86eb8fd31","Type":"ContainerStarted","Data":"a4af92dc82020f848fc10739302c2652938db547b6bcb453819e4595c3f34e60"}
Jan 23 14:36:00 crc kubenswrapper[4775]: E0123 14:36:00.889395 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 23 14:36:00 crc kubenswrapper[4775]: E0123 14:36:00.891398 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 23 14:36:00 crc kubenswrapper[4775]: E0123 14:36:00.896714 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 23 14:36:00 crc kubenswrapper[4775]: E0123 14:36:00.896761 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 23 14:36:01 crc kubenswrapper[4775]: I0123 14:36:01.424826 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"fab3b1c6-093c-4891-957c-fad86eb8fd31","Type":"ContainerStarted","Data":"6e7d6c4e51f6df27b5bf4d3033ffb1fe6002c520e973c427463e015965f2ce9d"}
Jan 23 14:36:01 crc kubenswrapper[4775]: I0123 14:36:01.425056 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 23 14:36:01 crc kubenswrapper[4775]: I0123 14:36:01.439917 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podStartSLOduration=2.439900502 podStartE2EDuration="2.439900502s" podCreationTimestamp="2026-01-23 14:35:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:36:01.439006228 +0000 UTC m=+1908.433834968" watchObservedRunningTime="2026-01-23 14:36:01.439900502 +0000 UTC m=+1908.434729242"
Jan 23 14:36:05 crc kubenswrapper[4775]: E0123 14:36:05.887652 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 23 14:36:05 crc kubenswrapper[4775]: E0123 14:36:05.889644 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 23 14:36:05 crc kubenswrapper[4775]: E0123 14:36:05.891437 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 23 14:36:05 crc kubenswrapper[4775]: E0123 14:36:05.891504 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 23 14:36:09 crc kubenswrapper[4775]: I0123 14:36:09.853776 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.382923 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc"]
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.384531 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.387498 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-config-data"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.390476 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-scripts"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.397476 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc"]
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.518919 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.518960 4775 generic.go:334] "Generic (PLEG): container finished" podID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" exitCode=137
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.518994 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"bde4903d-4224-4139-a444-3c5baf78ff7b","Type":"ContainerDied","Data":"0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76"}
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.519021 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"bde4903d-4224-4139-a444-3c5baf78ff7b","Type":"ContainerDied","Data":"6eb0a59b18194a13bbf978de13cdca6d55273f8b0946c59e7a3ffc58619e5617"}
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.519041 4775 scope.go:117] "RemoveContainer" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.536912 4775 scope.go:117] "RemoveContainer" containerID="fb284da39186f2ab9d4d50e0c08df4cb63745374c070a74a4239a3a6536ab15f"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.537077 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:36:10 crc kubenswrapper[4775]: E0123 14:36:10.537523 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.537549 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 23 14:36:10 crc kubenswrapper[4775]: E0123 14:36:10.537569 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.537578 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 23 14:36:10 crc kubenswrapper[4775]: E0123 14:36:10.537599 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.537610 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.537865 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.537909 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.537931 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.538658 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.541116 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.556727 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.572053 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rxc4\" (UniqueName: \"kubernetes.io/projected/a194a858-8c18-41e1-9a10-428397753ece-kube-api-access-2rxc4\") pod \"nova-kuttl-cell0-cell-mapping-qxjlc\" (UID: \"a194a858-8c18-41e1-9a10-428397753ece\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.572102 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a194a858-8c18-41e1-9a10-428397753ece-scripts\") pod \"nova-kuttl-cell0-cell-mapping-qxjlc\" (UID: \"a194a858-8c18-41e1-9a10-428397753ece\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.572200 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a194a858-8c18-41e1-9a10-428397753ece-config-data\") pod \"nova-kuttl-cell0-cell-mapping-qxjlc\" (UID: \"a194a858-8c18-41e1-9a10-428397753ece\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.580439 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"]
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.581514 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.584512 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-novncproxy-config-data"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.606972 4775 scope.go:117] "RemoveContainer" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76"
Jan 23 14:36:10 crc kubenswrapper[4775]: E0123 14:36:10.608677 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76\": container with ID starting with 0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76 not found: ID does not exist" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.608710 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76"} err="failed to get container status \"0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76\": rpc error: code = NotFound desc = could not find container \"0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76\": container with ID starting with 0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76 not found: ID does not exist"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.608739 4775 scope.go:117] "RemoveContainer" containerID="fb284da39186f2ab9d4d50e0c08df4cb63745374c070a74a4239a3a6536ab15f"
Jan 23 14:36:10 crc kubenswrapper[4775]: E0123 14:36:10.610711 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb284da39186f2ab9d4d50e0c08df4cb63745374c070a74a4239a3a6536ab15f\": container with ID starting with fb284da39186f2ab9d4d50e0c08df4cb63745374c070a74a4239a3a6536ab15f not found: ID does not exist" containerID="fb284da39186f2ab9d4d50e0c08df4cb63745374c070a74a4239a3a6536ab15f"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.610774 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb284da39186f2ab9d4d50e0c08df4cb63745374c070a74a4239a3a6536ab15f"} err="failed to get container status \"fb284da39186f2ab9d4d50e0c08df4cb63745374c070a74a4239a3a6536ab15f\": rpc error: code = NotFound desc = could not find container \"fb284da39186f2ab9d4d50e0c08df4cb63745374c070a74a4239a3a6536ab15f\": container with ID starting with fb284da39186f2ab9d4d50e0c08df4cb63745374c070a74a4239a3a6536ab15f not found: ID does not exist"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.658485 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"]
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.673220 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data\") pod \"bde4903d-4224-4139-a444-3c5baf78ff7b\" (UID: \"bde4903d-4224-4139-a444-3c5baf78ff7b\") "
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.673344 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgvjr\" (UniqueName: \"kubernetes.io/projected/bde4903d-4224-4139-a444-3c5baf78ff7b-kube-api-access-qgvjr\") pod \"bde4903d-4224-4139-a444-3c5baf78ff7b\" (UID: \"bde4903d-4224-4139-a444-3c5baf78ff7b\") "
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.673610 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rxc4\" (UniqueName: \"kubernetes.io/projected/a194a858-8c18-41e1-9a10-428397753ece-kube-api-access-2rxc4\") pod \"nova-kuttl-cell0-cell-mapping-qxjlc\" (UID: \"a194a858-8c18-41e1-9a10-428397753ece\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.673651 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a194a858-8c18-41e1-9a10-428397753ece-scripts\") pod \"nova-kuttl-cell0-cell-mapping-qxjlc\" (UID: \"a194a858-8c18-41e1-9a10-428397753ece\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.673674 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdgzh\" (UniqueName: \"kubernetes.io/projected/e2f66b57-925b-4d68-9917-77fded405cfd-kube-api-access-qdgzh\") pod \"nova-kuttl-scheduler-0\" (UID: \"e2f66b57-925b-4d68-9917-77fded405cfd\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.673697 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a194a858-8c18-41e1-9a10-428397753ece-config-data\") pod \"nova-kuttl-cell0-cell-mapping-qxjlc\" (UID: \"a194a858-8c18-41e1-9a10-428397753ece\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.673760 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2f66b57-925b-4d68-9917-77fded405cfd-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"e2f66b57-925b-4d68-9917-77fded405cfd\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.681906 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bde4903d-4224-4139-a444-3c5baf78ff7b-kube-api-access-qgvjr" (OuterVolumeSpecName: "kube-api-access-qgvjr") pod "bde4903d-4224-4139-a444-3c5baf78ff7b" (UID: "bde4903d-4224-4139-a444-3c5baf78ff7b"). InnerVolumeSpecName "kube-api-access-qgvjr". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.688593 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a194a858-8c18-41e1-9a10-428397753ece-scripts\") pod \"nova-kuttl-cell0-cell-mapping-qxjlc\" (UID: \"a194a858-8c18-41e1-9a10-428397753ece\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.697711 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a194a858-8c18-41e1-9a10-428397753ece-config-data\") pod \"nova-kuttl-cell0-cell-mapping-qxjlc\" (UID: \"a194a858-8c18-41e1-9a10-428397753ece\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.702356 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rxc4\" (UniqueName: \"kubernetes.io/projected/a194a858-8c18-41e1-9a10-428397753ece-kube-api-access-2rxc4\") pod \"nova-kuttl-cell0-cell-mapping-qxjlc\" (UID: \"a194a858-8c18-41e1-9a10-428397753ece\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.709844 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.711390 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.714145 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.728258 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.736058 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data" (OuterVolumeSpecName: "config-data") pod "bde4903d-4224-4139-a444-3c5baf78ff7b" (UID: "bde4903d-4224-4139-a444-3c5baf78ff7b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.742348 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.756170 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.757510 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.759556 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.764300 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.775082 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2f66b57-925b-4d68-9917-77fded405cfd-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"e2f66b57-925b-4d68-9917-77fded405cfd\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.775169 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdgzh\" (UniqueName: \"kubernetes.io/projected/e2f66b57-925b-4d68-9917-77fded405cfd-kube-api-access-qdgzh\") pod \"nova-kuttl-scheduler-0\" (UID: \"e2f66b57-925b-4d68-9917-77fded405cfd\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.775198 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb15b357-f464-4e43-a038-3b9e72455d49-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"cb15b357-f464-4e43-a038-3b9e72455d49\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.775233 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpmmj\" (UniqueName: \"kubernetes.io/projected/cb15b357-f464-4e43-a038-3b9e72455d49-kube-api-access-gpmmj\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"cb15b357-f464-4e43-a038-3b9e72455d49\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.775288 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.775300 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgvjr\" (UniqueName: \"kubernetes.io/projected/bde4903d-4224-4139-a444-3c5baf78ff7b-kube-api-access-qgvjr\") on node \"crc\" DevicePath \"\"" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.778642 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2f66b57-925b-4d68-9917-77fded405cfd-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"e2f66b57-925b-4d68-9917-77fded405cfd\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.804192 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdgzh\" (UniqueName: \"kubernetes.io/projected/e2f66b57-925b-4d68-9917-77fded405cfd-kube-api-access-qdgzh\") pod \"nova-kuttl-scheduler-0\" (UID: \"e2f66b57-925b-4d68-9917-77fded405cfd\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.881084 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-config-data\") pod \"nova-kuttl-api-0\" (UID: \"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.881153 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfbgh\" (UniqueName: \"kubernetes.io/projected/25188693-7059-4db5-88d0-6e36d8d2d4ed-kube-api-access-xfbgh\") pod \"nova-kuttl-metadata-0\" (UID: \"25188693-7059-4db5-88d0-6e36d8d2d4ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.882905 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb15b357-f464-4e43-a038-3b9e72455d49-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"cb15b357-f464-4e43-a038-3b9e72455d49\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.886895 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25188693-7059-4db5-88d0-6e36d8d2d4ed-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"25188693-7059-4db5-88d0-6e36d8d2d4ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.886963 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4mwg\" (UniqueName: \"kubernetes.io/projected/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-kube-api-access-p4mwg\") pod \"nova-kuttl-api-0\" (UID: \"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.886995 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpmmj\" (UniqueName: \"kubernetes.io/projected/cb15b357-f464-4e43-a038-3b9e72455d49-kube-api-access-gpmmj\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"cb15b357-f464-4e43-a038-3b9e72455d49\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.887052 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-logs\") pod \"nova-kuttl-api-0\" (UID: \"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.887091 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25188693-7059-4db5-88d0-6e36d8d2d4ed-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"25188693-7059-4db5-88d0-6e36d8d2d4ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.888992 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb15b357-f464-4e43-a038-3b9e72455d49-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"cb15b357-f464-4e43-a038-3b9e72455d49\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.905242 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpmmj\" (UniqueName: 
\"kubernetes.io/projected/cb15b357-f464-4e43-a038-3b9e72455d49-kube-api-access-gpmmj\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"cb15b357-f464-4e43-a038-3b9e72455d49\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.906516 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.950203 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.989200 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-config-data\") pod \"nova-kuttl-api-0\" (UID: \"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.989248 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfbgh\" (UniqueName: \"kubernetes.io/projected/25188693-7059-4db5-88d0-6e36d8d2d4ed-kube-api-access-xfbgh\") pod \"nova-kuttl-metadata-0\" (UID: \"25188693-7059-4db5-88d0-6e36d8d2d4ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.989274 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25188693-7059-4db5-88d0-6e36d8d2d4ed-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"25188693-7059-4db5-88d0-6e36d8d2d4ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.989300 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4mwg\" (UniqueName: \"kubernetes.io/projected/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-kube-api-access-p4mwg\") pod \"nova-kuttl-api-0\" (UID: \"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.989331 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-logs\") pod \"nova-kuttl-api-0\" (UID: \"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.989359 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25188693-7059-4db5-88d0-6e36d8d2d4ed-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"25188693-7059-4db5-88d0-6e36d8d2d4ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.991325 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-logs\") pod \"nova-kuttl-api-0\" (UID: \"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.992010 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25188693-7059-4db5-88d0-6e36d8d2d4ed-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"25188693-7059-4db5-88d0-6e36d8d2d4ed\") " 
pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.994049 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25188693-7059-4db5-88d0-6e36d8d2d4ed-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"25188693-7059-4db5-88d0-6e36d8d2d4ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.994576 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-config-data\") pod \"nova-kuttl-api-0\" (UID: \"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.005094 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4mwg\" (UniqueName: \"kubernetes.io/projected/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-kube-api-access-p4mwg\") pod \"nova-kuttl-api-0\" (UID: \"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.005657 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfbgh\" (UniqueName: \"kubernetes.io/projected/25188693-7059-4db5-88d0-6e36d8d2d4ed-kube-api-access-xfbgh\") pod \"nova-kuttl-metadata-0\" (UID: \"25188693-7059-4db5-88d0-6e36d8d2d4ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.032345 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.072287 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.187065 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc"] Jan 23 14:36:11 crc kubenswrapper[4775]: W0123 14:36:11.196350 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda194a858_8c18_41e1_9a10_428397753ece.slice/crio-540d7fc621bfb77baee6f4dc9b9760f31678a0d97998700432ed6c76b8808f94 WatchSource:0}: Error finding container 540d7fc621bfb77baee6f4dc9b9760f31678a0d97998700432ed6c76b8808f94: Status 404 returned error can't find the container with id 540d7fc621bfb77baee6f4dc9b9760f31678a0d97998700432ed6c76b8808f94 Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.271092 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r"] Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.272250 4775 util.go:30] "No sandbox for pod can be found. 
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.281026 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.281246 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-scripts"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.281317 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r"]
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.292161 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvhbc\" (UniqueName: \"kubernetes.io/projected/263d2fcc-c533-4291-8e78-d8e9a2ee2894-kube-api-access-xvhbc\") pod \"nova-kuttl-cell1-conductor-db-sync-sjz5r\" (UID: \"263d2fcc-c533-4291-8e78-d8e9a2ee2894\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.292223 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/263d2fcc-c533-4291-8e78-d8e9a2ee2894-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-sjz5r\" (UID: \"263d2fcc-c533-4291-8e78-d8e9a2ee2894\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.292310 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/263d2fcc-c533-4291-8e78-d8e9a2ee2894-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-sjz5r\" (UID: \"263d2fcc-c533-4291-8e78-d8e9a2ee2894\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.314772 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.393141 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/263d2fcc-c533-4291-8e78-d8e9a2ee2894-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-sjz5r\" (UID: \"263d2fcc-c533-4291-8e78-d8e9a2ee2894\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.393218 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvhbc\" (UniqueName: \"kubernetes.io/projected/263d2fcc-c533-4291-8e78-d8e9a2ee2894-kube-api-access-xvhbc\") pod \"nova-kuttl-cell1-conductor-db-sync-sjz5r\" (UID: \"263d2fcc-c533-4291-8e78-d8e9a2ee2894\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.393265 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/263d2fcc-c533-4291-8e78-d8e9a2ee2894-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-sjz5r\" (UID: \"263d2fcc-c533-4291-8e78-d8e9a2ee2894\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.398419 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/263d2fcc-c533-4291-8e78-d8e9a2ee2894-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-sjz5r\" (UID: \"263d2fcc-c533-4291-8e78-d8e9a2ee2894\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.398932 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/263d2fcc-c533-4291-8e78-d8e9a2ee2894-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-sjz5r\" (UID: \"263d2fcc-c533-4291-8e78-d8e9a2ee2894\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.410399 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvhbc\" (UniqueName: \"kubernetes.io/projected/263d2fcc-c533-4291-8e78-d8e9a2ee2894-kube-api-access-xvhbc\") pod \"nova-kuttl-cell1-conductor-db-sync-sjz5r\" (UID: \"263d2fcc-c533-4291-8e78-d8e9a2ee2894\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.429905 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"]
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.531184 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.534438 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"cb15b357-f464-4e43-a038-3b9e72455d49","Type":"ContainerStarted","Data":"9cfe823b908c6c40ff233692171c40835d933d01141f24e68717f0715b55d84e"}
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.535560 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"
Jan 23 14:36:11 crc kubenswrapper[4775]: W0123 14:36:11.535584 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54fdd9aa_36fd_4817_9902_8e9e7c4d6f2b.slice/crio-c41817894212c3f14aaa33f7a5882a3305e520c3835b174c4049ab7f3bdb2ab3 WatchSource:0}: Error finding container c41817894212c3f14aaa33f7a5882a3305e520c3835b174c4049ab7f3bdb2ab3: Status 404 returned error can't find the container with id c41817894212c3f14aaa33f7a5882a3305e520c3835b174c4049ab7f3bdb2ab3
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.536687 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"e2f66b57-925b-4d68-9917-77fded405cfd","Type":"ContainerStarted","Data":"2b8e3f238a5f79ab38d936a701163189da7e99d755c73a0f5f5797f13ecc3f18"}
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.536719 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"e2f66b57-925b-4d68-9917-77fded405cfd","Type":"ContainerStarted","Data":"c2b7b80f0829ed1270ce5358668a91d6a85cf45703f80218b39e2c994e384bba"}
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.538479 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc" event={"ID":"a194a858-8c18-41e1-9a10-428397753ece","Type":"ContainerStarted","Data":"8d06597f807e3e42864d38d837f7984e31d4d87d055c7ea7bb57e3bf624b9c80"}
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.538509 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc" event={"ID":"a194a858-8c18-41e1-9a10-428397753ece","Type":"ContainerStarted","Data":"540d7fc621bfb77baee6f4dc9b9760f31678a0d97998700432ed6c76b8808f94"}
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.540100 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.562166 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc" podStartSLOduration=1.562147885 podStartE2EDuration="1.562147885s" podCreationTimestamp="2026-01-23 14:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:36:11.551127938 +0000 UTC m=+1918.545956688" watchObservedRunningTime="2026-01-23 14:36:11.562147885 +0000 UTC m=+1918.556976625"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.574171 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=1.5741573180000001 podStartE2EDuration="1.574157318s" podCreationTimestamp="2026-01-23 14:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:36:11.566156093 +0000 UTC m=+1918.560984833" watchObservedRunningTime="2026-01-23 14:36:11.574157318 +0000 UTC m=+1918.568986058"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.592098 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"]
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.598683 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"]
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.653163 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.749593 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" path="/var/lib/kubelet/pods/bde4903d-4224-4139-a444-3c5baf78ff7b/volumes"
Jan 23 14:36:12 crc kubenswrapper[4775]: I0123 14:36:12.066956 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r"]
Jan 23 14:36:12 crc kubenswrapper[4775]: W0123 14:36:12.069728 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod263d2fcc_c533_4291_8e78_d8e9a2ee2894.slice/crio-8fc9003e231bab5ab28b1915938e5b790cbef806227566e21ee1a940eb1ba261 WatchSource:0}: Error finding container 8fc9003e231bab5ab28b1915938e5b790cbef806227566e21ee1a940eb1ba261: Status 404 returned error can't find the container with id 8fc9003e231bab5ab28b1915938e5b790cbef806227566e21ee1a940eb1ba261
Jan 23 14:36:12 crc kubenswrapper[4775]: I0123 14:36:12.575486 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b","Type":"ContainerStarted","Data":"39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8"}
Jan 23 14:36:12 crc kubenswrapper[4775]: I0123 14:36:12.575781 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b","Type":"ContainerStarted","Data":"3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d"}
Jan 23 14:36:12 crc kubenswrapper[4775]: I0123 14:36:12.575795 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b","Type":"ContainerStarted","Data":"c41817894212c3f14aaa33f7a5882a3305e520c3835b174c4049ab7f3bdb2ab3"}
Jan 23 14:36:12 crc kubenswrapper[4775]: I0123 14:36:12.579333 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"25188693-7059-4db5-88d0-6e36d8d2d4ed","Type":"ContainerStarted","Data":"5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85"}
Jan 23 14:36:12 crc kubenswrapper[4775]: I0123 14:36:12.579372 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"25188693-7059-4db5-88d0-6e36d8d2d4ed","Type":"ContainerStarted","Data":"a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda"}
Jan 23 14:36:12 crc kubenswrapper[4775]: I0123 14:36:12.579395 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"25188693-7059-4db5-88d0-6e36d8d2d4ed","Type":"ContainerStarted","Data":"673864771c4790e0938c1c15347461b29921695eaa3a3c61f8309e7d406615cd"}
Jan 23 14:36:12 crc kubenswrapper[4775]: I0123 14:36:12.586434 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r" event={"ID":"263d2fcc-c533-4291-8e78-d8e9a2ee2894","Type":"ContainerStarted","Data":"10368cb00c51c9c09d42987a704f6c282da205a1023667df771174ceb21b2b54"}
Jan 23 14:36:12 crc kubenswrapper[4775]: I0123 14:36:12.586461 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r" event={"ID":"263d2fcc-c533-4291-8e78-d8e9a2ee2894","Type":"ContainerStarted","Data":"8fc9003e231bab5ab28b1915938e5b790cbef806227566e21ee1a940eb1ba261"}
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r" event={"ID":"263d2fcc-c533-4291-8e78-d8e9a2ee2894","Type":"ContainerStarted","Data":"8fc9003e231bab5ab28b1915938e5b790cbef806227566e21ee1a940eb1ba261"} Jan 23 14:36:12 crc kubenswrapper[4775]: I0123 14:36:12.590357 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"cb15b357-f464-4e43-a038-3b9e72455d49","Type":"ContainerStarted","Data":"85f6d20aed8bcbed2caea1d3221d7e598f335f036c48d01305911f6951677f7d"} Jan 23 14:36:12 crc kubenswrapper[4775]: I0123 14:36:12.608624 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.6086030300000003 podStartE2EDuration="2.60860303s" podCreationTimestamp="2026-01-23 14:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:36:12.600379538 +0000 UTC m=+1919.595208288" watchObservedRunningTime="2026-01-23 14:36:12.60860303 +0000 UTC m=+1919.603431780" Jan 23 14:36:12 crc kubenswrapper[4775]: I0123 14:36:12.635925 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podStartSLOduration=2.635894575 podStartE2EDuration="2.635894575s" podCreationTimestamp="2026-01-23 14:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:36:12.624546079 +0000 UTC m=+1919.619374849" watchObservedRunningTime="2026-01-23 14:36:12.635894575 +0000 UTC m=+1919.630723335" Jan 23 14:36:12 crc kubenswrapper[4775]: I0123 14:36:12.649179 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.649151222 podStartE2EDuration="2.649151222s" podCreationTimestamp="2026-01-23 14:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:36:12.642699268 +0000 UTC m=+1919.637528248" watchObservedRunningTime="2026-01-23 14:36:12.649151222 +0000 UTC m=+1919.643979982" Jan 23 14:36:12 crc kubenswrapper[4775]: I0123 14:36:12.667427 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r" podStartSLOduration=1.667399644 podStartE2EDuration="1.667399644s" podCreationTimestamp="2026-01-23 14:36:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:36:12.664930097 +0000 UTC m=+1919.659758837" watchObservedRunningTime="2026-01-23 14:36:12.667399644 +0000 UTC m=+1919.662228394" Jan 23 14:36:15 crc kubenswrapper[4775]: I0123 14:36:15.625761 4775 generic.go:334] "Generic (PLEG): container finished" podID="263d2fcc-c533-4291-8e78-d8e9a2ee2894" containerID="10368cb00c51c9c09d42987a704f6c282da205a1023667df771174ceb21b2b54" exitCode=0 Jan 23 14:36:15 crc kubenswrapper[4775]: I0123 14:36:15.626050 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r" event={"ID":"263d2fcc-c533-4291-8e78-d8e9a2ee2894","Type":"ContainerDied","Data":"10368cb00c51c9c09d42987a704f6c282da205a1023667df771174ceb21b2b54"} Jan 23 14:36:15 crc kubenswrapper[4775]: I0123 
14:36:15.636433 4775 generic.go:334] "Generic (PLEG): container finished" podID="a194a858-8c18-41e1-9a10-428397753ece" containerID="8d06597f807e3e42864d38d837f7984e31d4d87d055c7ea7bb57e3bf624b9c80" exitCode=0 Jan 23 14:36:15 crc kubenswrapper[4775]: I0123 14:36:15.637116 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc" event={"ID":"a194a858-8c18-41e1-9a10-428397753ece","Type":"ContainerDied","Data":"8d06597f807e3e42864d38d837f7984e31d4d87d055c7ea7bb57e3bf624b9c80"} Jan 23 14:36:15 crc kubenswrapper[4775]: I0123 14:36:15.907875 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:36:15 crc kubenswrapper[4775]: I0123 14:36:15.951408 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:36:16 crc kubenswrapper[4775]: I0123 14:36:16.033213 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:16 crc kubenswrapper[4775]: I0123 14:36:16.033349 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.134100 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc" Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.141598 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r" Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.314586 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rxc4\" (UniqueName: \"kubernetes.io/projected/a194a858-8c18-41e1-9a10-428397753ece-kube-api-access-2rxc4\") pod \"a194a858-8c18-41e1-9a10-428397753ece\" (UID: \"a194a858-8c18-41e1-9a10-428397753ece\") " Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.315079 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/263d2fcc-c533-4291-8e78-d8e9a2ee2894-config-data\") pod \"263d2fcc-c533-4291-8e78-d8e9a2ee2894\" (UID: \"263d2fcc-c533-4291-8e78-d8e9a2ee2894\") " Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.315161 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/263d2fcc-c533-4291-8e78-d8e9a2ee2894-scripts\") pod \"263d2fcc-c533-4291-8e78-d8e9a2ee2894\" (UID: \"263d2fcc-c533-4291-8e78-d8e9a2ee2894\") " Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.315296 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a194a858-8c18-41e1-9a10-428397753ece-scripts\") pod \"a194a858-8c18-41e1-9a10-428397753ece\" (UID: \"a194a858-8c18-41e1-9a10-428397753ece\") " Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.315333 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a194a858-8c18-41e1-9a10-428397753ece-config-data\") pod \"a194a858-8c18-41e1-9a10-428397753ece\" (UID: \"a194a858-8c18-41e1-9a10-428397753ece\") " Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.315392 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-xvhbc\" (UniqueName: \"kubernetes.io/projected/263d2fcc-c533-4291-8e78-d8e9a2ee2894-kube-api-access-xvhbc\") pod \"263d2fcc-c533-4291-8e78-d8e9a2ee2894\" (UID: \"263d2fcc-c533-4291-8e78-d8e9a2ee2894\") " Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.320602 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/263d2fcc-c533-4291-8e78-d8e9a2ee2894-scripts" (OuterVolumeSpecName: "scripts") pod "263d2fcc-c533-4291-8e78-d8e9a2ee2894" (UID: "263d2fcc-c533-4291-8e78-d8e9a2ee2894"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.320896 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a194a858-8c18-41e1-9a10-428397753ece-scripts" (OuterVolumeSpecName: "scripts") pod "a194a858-8c18-41e1-9a10-428397753ece" (UID: "a194a858-8c18-41e1-9a10-428397753ece"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.322108 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/263d2fcc-c533-4291-8e78-d8e9a2ee2894-kube-api-access-xvhbc" (OuterVolumeSpecName: "kube-api-access-xvhbc") pod "263d2fcc-c533-4291-8e78-d8e9a2ee2894" (UID: "263d2fcc-c533-4291-8e78-d8e9a2ee2894"). InnerVolumeSpecName "kube-api-access-xvhbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.322134 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a194a858-8c18-41e1-9a10-428397753ece-kube-api-access-2rxc4" (OuterVolumeSpecName: "kube-api-access-2rxc4") pod "a194a858-8c18-41e1-9a10-428397753ece" (UID: "a194a858-8c18-41e1-9a10-428397753ece"). InnerVolumeSpecName "kube-api-access-2rxc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.340687 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/263d2fcc-c533-4291-8e78-d8e9a2ee2894-config-data" (OuterVolumeSpecName: "config-data") pod "263d2fcc-c533-4291-8e78-d8e9a2ee2894" (UID: "263d2fcc-c533-4291-8e78-d8e9a2ee2894"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.355334 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a194a858-8c18-41e1-9a10-428397753ece-config-data" (OuterVolumeSpecName: "config-data") pod "a194a858-8c18-41e1-9a10-428397753ece" (UID: "a194a858-8c18-41e1-9a10-428397753ece"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.417472 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/263d2fcc-c533-4291-8e78-d8e9a2ee2894-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.417535 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a194a858-8c18-41e1-9a10-428397753ece-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.417554 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a194a858-8c18-41e1-9a10-428397753ece-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.417573 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvhbc\" (UniqueName: \"kubernetes.io/projected/263d2fcc-c533-4291-8e78-d8e9a2ee2894-kube-api-access-xvhbc\") on node \"crc\" DevicePath \"\"" Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.417595 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rxc4\" (UniqueName: \"kubernetes.io/projected/a194a858-8c18-41e1-9a10-428397753ece-kube-api-access-2rxc4\") on node \"crc\" DevicePath \"\"" Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.417612 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/263d2fcc-c533-4291-8e78-d8e9a2ee2894-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.660265 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc" event={"ID":"a194a858-8c18-41e1-9a10-428397753ece","Type":"ContainerDied","Data":"540d7fc621bfb77baee6f4dc9b9760f31678a0d97998700432ed6c76b8808f94"} Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.660289 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc" Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.660421 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="540d7fc621bfb77baee6f4dc9b9760f31678a0d97998700432ed6c76b8808f94" Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.662931 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r" event={"ID":"263d2fcc-c533-4291-8e78-d8e9a2ee2894","Type":"ContainerDied","Data":"8fc9003e231bab5ab28b1915938e5b790cbef806227566e21ee1a940eb1ba261"} Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.662979 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fc9003e231bab5ab28b1915938e5b790cbef806227566e21ee1a940eb1ba261" Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.663082 4775 util.go:48] "No ready sandbox for pod can be found. 
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.767376 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"]
Jan 23 14:36:17 crc kubenswrapper[4775]: E0123 14:36:17.767707 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="263d2fcc-c533-4291-8e78-d8e9a2ee2894" containerName="nova-kuttl-cell1-conductor-db-sync"
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.767724 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="263d2fcc-c533-4291-8e78-d8e9a2ee2894" containerName="nova-kuttl-cell1-conductor-db-sync"
Jan 23 14:36:17 crc kubenswrapper[4775]: E0123 14:36:17.767758 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a194a858-8c18-41e1-9a10-428397753ece" containerName="nova-manage"
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.767767 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a194a858-8c18-41e1-9a10-428397753ece" containerName="nova-manage"
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.767967 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="a194a858-8c18-41e1-9a10-428397753ece" containerName="nova-manage"
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.767981 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="263d2fcc-c533-4291-8e78-d8e9a2ee2894" containerName="nova-kuttl-cell1-conductor-db-sync"
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.768577 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.772896 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data"
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.824285 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"]
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.923966 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.924295 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b" containerName="nova-kuttl-api-log" containerID="cri-o://3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d" gracePeriod=30
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.924609 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b" containerName="nova-kuttl-api-api" containerID="cri-o://39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8" gracePeriod=30
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.925910 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gkk8\" (UniqueName: \"kubernetes.io/projected/1fd448a3-6897-490f-9c92-98590cee53ca-kube-api-access-2gkk8\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"1fd448a3-6897-490f-9c92-98590cee53ca\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.926046 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd448a3-6897-490f-9c92-98590cee53ca-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"1fd448a3-6897-490f-9c92-98590cee53ca\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.938718 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.939272 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="e2f66b57-925b-4d68-9917-77fded405cfd" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://2b8e3f238a5f79ab38d936a701163189da7e99d755c73a0f5f5797f13ecc3f18" gracePeriod=30
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.021935 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.022181 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="25188693-7059-4db5-88d0-6e36d8d2d4ed" containerName="nova-kuttl-metadata-log" containerID="cri-o://a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda" gracePeriod=30
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.022325 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="25188693-7059-4db5-88d0-6e36d8d2d4ed" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85" gracePeriod=30
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.027478 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd448a3-6897-490f-9c92-98590cee53ca-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"1fd448a3-6897-490f-9c92-98590cee53ca\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.027574 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gkk8\" (UniqueName: \"kubernetes.io/projected/1fd448a3-6897-490f-9c92-98590cee53ca-kube-api-access-2gkk8\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"1fd448a3-6897-490f-9c92-98590cee53ca\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.036191 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd448a3-6897-490f-9c92-98590cee53ca-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"1fd448a3-6897-490f-9c92-98590cee53ca\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.046432 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gkk8\" (UniqueName: \"kubernetes.io/projected/1fd448a3-6897-490f-9c92-98590cee53ca-kube-api-access-2gkk8\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"1fd448a3-6897-490f-9c92-98590cee53ca\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.138212 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.432359 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.534658 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-logs\") pod \"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b\" (UID: \"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b\") "
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.535653 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-config-data\") pod \"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b\" (UID: \"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b\") "
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.535725 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4mwg\" (UniqueName: \"kubernetes.io/projected/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-kube-api-access-p4mwg\") pod \"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b\" (UID: \"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b\") "
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.535458 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-logs" (OuterVolumeSpecName: "logs") pod "54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b" (UID: "54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.535968 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-logs\") on node \"crc\" DevicePath \"\""
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.539777 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-kube-api-access-p4mwg" (OuterVolumeSpecName: "kube-api-access-p4mwg") pod "54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b" (UID: "54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b"). InnerVolumeSpecName "kube-api-access-p4mwg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.563028 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-config-data" (OuterVolumeSpecName: "config-data") pod "54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b" (UID: "54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.588689 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.639134 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.639191 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4mwg\" (UniqueName: \"kubernetes.io/projected/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-kube-api-access-p4mwg\") on node \"crc\" DevicePath \"\""
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.643334 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"]
Jan 23 14:36:18 crc kubenswrapper[4775]: W0123 14:36:18.646735 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fd448a3_6897_490f_9c92_98590cee53ca.slice/crio-fd23cb47fab31b28e20c4e94779f6661d78bb8dc7769c7349d365bad01d35d17 WatchSource:0}: Error finding container fd23cb47fab31b28e20c4e94779f6661d78bb8dc7769c7349d365bad01d35d17: Status 404 returned error can't find the container with id fd23cb47fab31b28e20c4e94779f6661d78bb8dc7769c7349d365bad01d35d17
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.671968 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"1fd448a3-6897-490f-9c92-98590cee53ca","Type":"ContainerStarted","Data":"fd23cb47fab31b28e20c4e94779f6661d78bb8dc7769c7349d365bad01d35d17"}
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.673308 4775 generic.go:334] "Generic (PLEG): container finished" podID="54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b" containerID="39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8" exitCode=0
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.673329 4775 generic.go:334] "Generic (PLEG): container finished" podID="54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b" containerID="3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d" exitCode=143
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.673356 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b","Type":"ContainerDied","Data":"39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8"}
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.673372 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b","Type":"ContainerDied","Data":"3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d"}
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.673382 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b","Type":"ContainerDied","Data":"c41817894212c3f14aaa33f7a5882a3305e520c3835b174c4049ab7f3bdb2ab3"}
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.673396 4775 scope.go:117] "RemoveContainer" containerID="39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.673497 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.679342 4775 generic.go:334] "Generic (PLEG): container finished" podID="e2f66b57-925b-4d68-9917-77fded405cfd" containerID="2b8e3f238a5f79ab38d936a701163189da7e99d755c73a0f5f5797f13ecc3f18" exitCode=0
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.679416 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"e2f66b57-925b-4d68-9917-77fded405cfd","Type":"ContainerDied","Data":"2b8e3f238a5f79ab38d936a701163189da7e99d755c73a0f5f5797f13ecc3f18"}
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.681514 4775 generic.go:334] "Generic (PLEG): container finished" podID="25188693-7059-4db5-88d0-6e36d8d2d4ed" containerID="5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85" exitCode=0
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.681536 4775 generic.go:334] "Generic (PLEG): container finished" podID="25188693-7059-4db5-88d0-6e36d8d2d4ed" containerID="a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda" exitCode=143
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.681550 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"25188693-7059-4db5-88d0-6e36d8d2d4ed","Type":"ContainerDied","Data":"5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85"}
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.681564 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"25188693-7059-4db5-88d0-6e36d8d2d4ed","Type":"ContainerDied","Data":"a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda"}
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.681574 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"25188693-7059-4db5-88d0-6e36d8d2d4ed","Type":"ContainerDied","Data":"673864771c4790e0938c1c15347461b29921695eaa3a3c61f8309e7d406615cd"}
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.681652 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.711909 4775 scope.go:117] "RemoveContainer" containerID="3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.717293 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.727942 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.736247 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:36:18 crc kubenswrapper[4775]: E0123 14:36:18.736646 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25188693-7059-4db5-88d0-6e36d8d2d4ed" containerName="nova-kuttl-metadata-metadata"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.736661 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="25188693-7059-4db5-88d0-6e36d8d2d4ed" containerName="nova-kuttl-metadata-metadata"
Jan 23 14:36:18 crc kubenswrapper[4775]: E0123 14:36:18.736671 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b" containerName="nova-kuttl-api-log"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.736687 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b" containerName="nova-kuttl-api-log"
Jan 23 14:36:18 crc kubenswrapper[4775]: E0123 14:36:18.736698 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25188693-7059-4db5-88d0-6e36d8d2d4ed" containerName="nova-kuttl-metadata-log"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.736730 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="25188693-7059-4db5-88d0-6e36d8d2d4ed" containerName="nova-kuttl-metadata-log"
Jan 23 14:36:18 crc kubenswrapper[4775]: E0123 14:36:18.736749 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b" containerName="nova-kuttl-api-api"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.736756 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b" containerName="nova-kuttl-api-api"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.736959 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b" containerName="nova-kuttl-api-log"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.736971 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="25188693-7059-4db5-88d0-6e36d8d2d4ed" containerName="nova-kuttl-metadata-metadata"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.736987 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b" containerName="nova-kuttl-api-api"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.736995 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="25188693-7059-4db5-88d0-6e36d8d2d4ed" containerName="nova-kuttl-metadata-log"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.738045 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.739824 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.740010 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfbgh\" (UniqueName: \"kubernetes.io/projected/25188693-7059-4db5-88d0-6e36d8d2d4ed-kube-api-access-xfbgh\") pod \"25188693-7059-4db5-88d0-6e36d8d2d4ed\" (UID: \"25188693-7059-4db5-88d0-6e36d8d2d4ed\") "
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.740112 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25188693-7059-4db5-88d0-6e36d8d2d4ed-config-data\") pod \"25188693-7059-4db5-88d0-6e36d8d2d4ed\" (UID: \"25188693-7059-4db5-88d0-6e36d8d2d4ed\") "
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.740227 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25188693-7059-4db5-88d0-6e36d8d2d4ed-logs\") pod \"25188693-7059-4db5-88d0-6e36d8d2d4ed\" (UID: \"25188693-7059-4db5-88d0-6e36d8d2d4ed\") "
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.740899 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25188693-7059-4db5-88d0-6e36d8d2d4ed-logs" (OuterVolumeSpecName: "logs") pod "25188693-7059-4db5-88d0-6e36d8d2d4ed" (UID: "25188693-7059-4db5-88d0-6e36d8d2d4ed"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.746582 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.746616 4775 scope.go:117] "RemoveContainer" containerID="39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8"
Jan 23 14:36:18 crc kubenswrapper[4775]: E0123 14:36:18.747305 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8\": container with ID starting with 39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8 not found: ID does not exist" containerID="39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.747337 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8"} err="failed to get container status \"39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8\": rpc error: code = NotFound desc = could not find container \"39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8\": container with ID starting with 39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8 not found: ID does not exist"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.747360 4775 scope.go:117] "RemoveContainer" containerID="3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d"
Jan 23 14:36:18 crc kubenswrapper[4775]: E0123 14:36:18.747601 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d\": container with ID starting with 3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d not found: ID does not exist" containerID="3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.747628 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d"} err="failed to get container status \"3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d\": rpc error: code = NotFound desc = could not find container \"3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d\": container with ID starting with 3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d not found: ID does not exist"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.747645 4775 scope.go:117] "RemoveContainer" containerID="39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.747851 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8"} err="failed to get container status \"39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8\": rpc error: code = NotFound desc = could not find container \"39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8\": container with ID starting with 39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8 not found: ID does not exist"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.747870 4775 scope.go:117] "RemoveContainer" containerID="3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.748101 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25188693-7059-4db5-88d0-6e36d8d2d4ed-kube-api-access-xfbgh" (OuterVolumeSpecName: "kube-api-access-xfbgh") pod "25188693-7059-4db5-88d0-6e36d8d2d4ed" (UID: "25188693-7059-4db5-88d0-6e36d8d2d4ed"). InnerVolumeSpecName "kube-api-access-xfbgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.748118 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d"} err="failed to get container status \"3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d\": rpc error: code = NotFound desc = could not find container \"3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d\": container with ID starting with 3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d not found: ID does not exist"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.748153 4775 scope.go:117] "RemoveContainer" containerID="5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.751669 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.770762 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25188693-7059-4db5-88d0-6e36d8d2d4ed-config-data" (OuterVolumeSpecName: "config-data") pod "25188693-7059-4db5-88d0-6e36d8d2d4ed" (UID: "25188693-7059-4db5-88d0-6e36d8d2d4ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.789040 4775 scope.go:117] "RemoveContainer" containerID="a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.815342 4775 scope.go:117] "RemoveContainer" containerID="5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85"
Jan 23 14:36:18 crc kubenswrapper[4775]: E0123 14:36:18.815674 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85\": container with ID starting with 5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85 not found: ID does not exist" containerID="5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.815706 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85"} err="failed to get container status \"5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85\": rpc error: code = NotFound desc = could not find container \"5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85\": container with ID starting with 5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85 not found: ID does not exist"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.815725 4775 scope.go:117] "RemoveContainer" containerID="a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda"
Jan 23 14:36:18 crc kubenswrapper[4775]: E0123 14:36:18.816058 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda\": container with ID starting with a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda not found: ID does not exist" containerID="a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.816099 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda"} err="failed to get container status \"a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda\": rpc error: code = NotFound desc = could not find container \"a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda\": container with ID starting with a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda not found: ID does not exist"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.816126 4775 scope.go:117] "RemoveContainer" containerID="5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.817739 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85"} err="failed to get container status \"5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85\": rpc error: code = NotFound desc = could not find container \"5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85\": container with ID starting with 5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85 not found: ID does not exist"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.817757 4775 scope.go:117] "RemoveContainer" containerID="a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.818051 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda"} err="failed to get container status \"a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda\": rpc error: code = NotFound desc = could not find container \"a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda\": container with ID starting with a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda not found: ID does not exist"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.841578 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqng4\" (UniqueName: \"kubernetes.io/projected/5105347b-2714-4def-a8e9-8f2e72aa6a0e-kube-api-access-xqng4\") pod \"nova-kuttl-api-0\" (UID: \"5105347b-2714-4def-a8e9-8f2e72aa6a0e\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.841639 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5105347b-2714-4def-a8e9-8f2e72aa6a0e-config-data\") pod \"nova-kuttl-api-0\" (UID: \"5105347b-2714-4def-a8e9-8f2e72aa6a0e\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.841754 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5105347b-2714-4def-a8e9-8f2e72aa6a0e-logs\") pod \"nova-kuttl-api-0\" (UID: \"5105347b-2714-4def-a8e9-8f2e72aa6a0e\") " pod="nova-kuttl-default/nova-kuttl-api-0"
pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.841831 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfbgh\" (UniqueName: \"kubernetes.io/projected/25188693-7059-4db5-88d0-6e36d8d2d4ed-kube-api-access-xfbgh\") on node \"crc\" DevicePath \"\"" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.841842 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25188693-7059-4db5-88d0-6e36d8d2d4ed-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.841851 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25188693-7059-4db5-88d0-6e36d8d2d4ed-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.943725 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdgzh\" (UniqueName: \"kubernetes.io/projected/e2f66b57-925b-4d68-9917-77fded405cfd-kube-api-access-qdgzh\") pod \"e2f66b57-925b-4d68-9917-77fded405cfd\" (UID: \"e2f66b57-925b-4d68-9917-77fded405cfd\") " Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.944645 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2f66b57-925b-4d68-9917-77fded405cfd-config-data\") pod \"e2f66b57-925b-4d68-9917-77fded405cfd\" (UID: \"e2f66b57-925b-4d68-9917-77fded405cfd\") " Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.945132 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqng4\" (UniqueName: \"kubernetes.io/projected/5105347b-2714-4def-a8e9-8f2e72aa6a0e-kube-api-access-xqng4\") pod \"nova-kuttl-api-0\" (UID: \"5105347b-2714-4def-a8e9-8f2e72aa6a0e\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.945337 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5105347b-2714-4def-a8e9-8f2e72aa6a0e-config-data\") pod \"nova-kuttl-api-0\" (UID: \"5105347b-2714-4def-a8e9-8f2e72aa6a0e\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.945626 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5105347b-2714-4def-a8e9-8f2e72aa6a0e-logs\") pod \"nova-kuttl-api-0\" (UID: \"5105347b-2714-4def-a8e9-8f2e72aa6a0e\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.946473 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5105347b-2714-4def-a8e9-8f2e72aa6a0e-logs\") pod \"nova-kuttl-api-0\" (UID: \"5105347b-2714-4def-a8e9-8f2e72aa6a0e\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.949466 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2f66b57-925b-4d68-9917-77fded405cfd-kube-api-access-qdgzh" (OuterVolumeSpecName: "kube-api-access-qdgzh") pod "e2f66b57-925b-4d68-9917-77fded405cfd" (UID: "e2f66b57-925b-4d68-9917-77fded405cfd"). InnerVolumeSpecName "kube-api-access-qdgzh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.951081 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5105347b-2714-4def-a8e9-8f2e72aa6a0e-config-data\") pod \"nova-kuttl-api-0\" (UID: \"5105347b-2714-4def-a8e9-8f2e72aa6a0e\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.964885 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqng4\" (UniqueName: \"kubernetes.io/projected/5105347b-2714-4def-a8e9-8f2e72aa6a0e-kube-api-access-xqng4\") pod \"nova-kuttl-api-0\" (UID: \"5105347b-2714-4def-a8e9-8f2e72aa6a0e\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.966445 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2f66b57-925b-4d68-9917-77fded405cfd-config-data" (OuterVolumeSpecName: "config-data") pod "e2f66b57-925b-4d68-9917-77fded405cfd" (UID: "e2f66b57-925b-4d68-9917-77fded405cfd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.041831 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.047141 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdgzh\" (UniqueName: \"kubernetes.io/projected/e2f66b57-925b-4d68-9917-77fded405cfd-kube-api-access-qdgzh\") on node \"crc\" DevicePath \"\"" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.047231 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2f66b57-925b-4d68-9917-77fded405cfd-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.053225 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.059329 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:36:19 crc kubenswrapper[4775]: E0123 14:36:19.059920 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2f66b57-925b-4d68-9917-77fded405cfd" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.059944 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2f66b57-925b-4d68-9917-77fded405cfd" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.060123 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2f66b57-925b-4d68-9917-77fded405cfd" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.061278 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.063018 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.065919 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.067109 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.250408 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.250717 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.250797 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tptv7\" (UniqueName: \"kubernetes.io/projected/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-kube-api-access-tptv7\") pod \"nova-kuttl-metadata-0\" (UID: \"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.352152 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.352253 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tptv7\" (UniqueName: \"kubernetes.io/projected/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-kube-api-access-tptv7\") pod \"nova-kuttl-metadata-0\" (UID: \"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.352315 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.352659 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.368003 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.368268 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tptv7\" (UniqueName: \"kubernetes.io/projected/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-kube-api-access-tptv7\") pod \"nova-kuttl-metadata-0\" (UID: \"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:19 crc kubenswrapper[4775]: 
I0123 14:36:19.454487 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.532450 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:36:19 crc kubenswrapper[4775]: W0123 14:36:19.542738 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5105347b_2714_4def_a8e9_8f2e72aa6a0e.slice/crio-d9e4670bc66038767fb0431260ce9a971d6918fc58f17b5251c95f35343d4184 WatchSource:0}: Error finding container d9e4670bc66038767fb0431260ce9a971d6918fc58f17b5251c95f35343d4184: Status 404 returned error can't find the container with id d9e4670bc66038767fb0431260ce9a971d6918fc58f17b5251c95f35343d4184 Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.700497 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"1fd448a3-6897-490f-9c92-98590cee53ca","Type":"ContainerStarted","Data":"ccf720a5ab1296bf47d71ac94be34331eb7970f511d53c5e0642348a94e0e693"} Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.700876 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.705854 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"5105347b-2714-4def-a8e9-8f2e72aa6a0e","Type":"ContainerStarted","Data":"d9e4670bc66038767fb0431260ce9a971d6918fc58f17b5251c95f35343d4184"} Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.708146 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"e2f66b57-925b-4d68-9917-77fded405cfd","Type":"ContainerDied","Data":"c2b7b80f0829ed1270ce5358668a91d6a85cf45703f80218b39e2c994e384bba"} Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.708189 4775 scope.go:117] "RemoveContainer" containerID="2b8e3f238a5f79ab38d936a701163189da7e99d755c73a0f5f5797f13ecc3f18" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.708290 4775 util.go:48] "No ready sandbox for pod can be found. 
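
Taken together, the PLEG lines in this section tell the replacement story per pod: the old -0 pod's containers die (exit code 0 for the main process, 143 for the SIGTERM'd log sidecar), the sandbox goes, and a pod with the same name but a new UID starts. A hedged sketch that groups SyncLoop (PLEG) events by pod name to make those timelines readable:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    	"sort"
    )

    // Captures pod name, pod UID, and event type from SyncLoop (PLEG) lines.
    var plegRe = regexp.MustCompile(`SyncLoop \(PLEG\): event for pod" pod="([^"]+)" event=\{"ID":"([^"]+)","Type":"([^"]+)"`)

    func main() {
    	events := map[string][]string{} // pod name -> ordered "Type (uid prefix)"
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
    	for sc.Scan() {
    		if m := plegRe.FindStringSubmatch(sc.Text()); m != nil {
    			events[m[1]] = append(events[m[1]], fmt.Sprintf("%s (%s)", m[3], m[2][:8]))
    		}
    	}
    	pods := make([]string, 0, len(events))
    	for p := range events {
    		pods = append(pods, p)
    	}
    	sort.Strings(pods) // stable, readable ordering across runs
    	for _, p := range pods {
    		fmt.Println(p)
    		for _, e := range events[p] {
    			fmt.Println("   ", e)
    		}
    	}
    }
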
Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.730211 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25188693-7059-4db5-88d0-6e36d8d2d4ed" path="/var/lib/kubelet/pods/25188693-7059-4db5-88d0-6e36d8d2d4ed/volumes"
Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.731177 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b" path="/var/lib/kubelet/pods/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b/volumes"
Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.753425 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podStartSLOduration=2.753402492 podStartE2EDuration="2.753402492s" podCreationTimestamp="2026-01-23 14:36:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:36:19.732149399 +0000 UTC m=+1926.726978149" watchObservedRunningTime="2026-01-23 14:36:19.753402492 +0000 UTC m=+1926.748231232"
Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.762822 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.775478 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.786887 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.787828 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.790116 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data"
Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.794324 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.870222 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a43d5dd7-2b7b-4806-b358-976cf374cd43-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"a43d5dd7-2b7b-4806-b358-976cf374cd43\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.870441 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsvjz\" (UniqueName: \"kubernetes.io/projected/a43d5dd7-2b7b-4806-b358-976cf374cd43-kube-api-access-qsvjz\") pod \"nova-kuttl-scheduler-0\" (UID: \"a43d5dd7-2b7b-4806-b358-976cf374cd43\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.956629 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:36:19 crc kubenswrapper[4775]: W0123 14:36:19.974746 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5dd9eb2a_b8f2_46dc_bf7e_84a3ed13464c.slice/crio-caa9115385ef2dd139755ddcad55b0d6b1eaeda47902eb07664c2a2e9e6d25fe WatchSource:0}: Error finding container caa9115385ef2dd139755ddcad55b0d6b1eaeda47902eb07664c2a2e9e6d25fe: Status 404 returned error can't find the container with id caa9115385ef2dd139755ddcad55b0d6b1eaeda47902eb07664c2a2e9e6d25fe
Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.977475 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a43d5dd7-2b7b-4806-b358-976cf374cd43-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"a43d5dd7-2b7b-4806-b358-976cf374cd43\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.977525 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsvjz\" (UniqueName: \"kubernetes.io/projected/a43d5dd7-2b7b-4806-b358-976cf374cd43-kube-api-access-qsvjz\") pod \"nova-kuttl-scheduler-0\" (UID: \"a43d5dd7-2b7b-4806-b358-976cf374cd43\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.981877 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a43d5dd7-2b7b-4806-b358-976cf374cd43-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"a43d5dd7-2b7b-4806-b358-976cf374cd43\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.996503 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsvjz\" (UniqueName: \"kubernetes.io/projected/a43d5dd7-2b7b-4806-b358-976cf374cd43-kube-api-access-qsvjz\") pod \"nova-kuttl-scheduler-0\" (UID: \"a43d5dd7-2b7b-4806-b358-976cf374cd43\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:20 crc kubenswrapper[4775]: I0123 14:36:20.103843 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:20 crc kubenswrapper[4775]: I0123 14:36:20.560868 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:36:20 crc kubenswrapper[4775]: W0123 14:36:20.568058 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda43d5dd7_2b7b_4806_b358_976cf374cd43.slice/crio-5ab988d9e205a230e0cf678a8c867f1fb120bd7bcf1bf799de80ca2df82b80ed WatchSource:0}: Error finding container 5ab988d9e205a230e0cf678a8c867f1fb120bd7bcf1bf799de80ca2df82b80ed: Status 404 returned error can't find the container with id 5ab988d9e205a230e0cf678a8c867f1fb120bd7bcf1bf799de80ca2df82b80ed
Jan 23 14:36:20 crc kubenswrapper[4775]: I0123 14:36:20.716569 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"5105347b-2714-4def-a8e9-8f2e72aa6a0e","Type":"ContainerStarted","Data":"00e4a30509a85fcd43493ad6ff99a3894421472f9f200b72e0d40abb4cb63325"}
Jan 23 14:36:20 crc kubenswrapper[4775]: I0123 14:36:20.716963 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"5105347b-2714-4def-a8e9-8f2e72aa6a0e","Type":"ContainerStarted","Data":"cddd25a43481286d21a7942ec0a19f14f1525739081c8dcfc723d00716195f00"}
Jan 23 14:36:20 crc kubenswrapper[4775]: I0123 14:36:20.722279 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c","Type":"ContainerStarted","Data":"b5fa7a3b853cc420f905b785b5f3e45bf3cc366b5a136d58547716107cff7818"}
Jan 23 14:36:20 crc kubenswrapper[4775]: I0123 14:36:20.722309 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c","Type":"ContainerStarted","Data":"a52b2b8897ad657ebbfabc500688e10f06efb29fd89d42f56d12ede604cb2920"}
Jan 23 14:36:20 crc kubenswrapper[4775]: I0123 14:36:20.722321 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c","Type":"ContainerStarted","Data":"caa9115385ef2dd139755ddcad55b0d6b1eaeda47902eb07664c2a2e9e6d25fe"}
Jan 23 14:36:20 crc kubenswrapper[4775]: I0123 14:36:20.723632 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"a43d5dd7-2b7b-4806-b358-976cf374cd43","Type":"ContainerStarted","Data":"5ab988d9e205a230e0cf678a8c867f1fb120bd7bcf1bf799de80ca2df82b80ed"}
Jan 23 14:36:20 crc kubenswrapper[4775]: I0123 14:36:20.739338 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.7393227060000003 podStartE2EDuration="2.739322706s" podCreationTimestamp="2026-01-23 14:36:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:36:20.737218069 +0000 UTC m=+1927.732046809" watchObservedRunningTime="2026-01-23 14:36:20.739322706 +0000 UTC m=+1927.734151446"
Jan 23 14:36:20 crc kubenswrapper[4775]: I0123 14:36:20.760457 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=1.760443805 podStartE2EDuration="1.760443805s" podCreationTimestamp="2026-01-23 14:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:36:20.752735367 +0000 UTC m=+1927.747564117" watchObservedRunningTime="2026-01-23 14:36:20.760443805 +0000 UTC m=+1927.755272545"
Jan 23 14:36:20 crc kubenswrapper[4775]: I0123 14:36:20.780761 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=1.780747182 podStartE2EDuration="1.780747182s" podCreationTimestamp="2026-01-23 14:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:36:20.772482329 +0000 UTC m=+1927.767311069" watchObservedRunningTime="2026-01-23 14:36:20.780747182 +0000 UTC m=+1927.775575932"
Jan 23 14:36:20 crc kubenswrapper[4775]: I0123 14:36:20.951037 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:36:20 crc kubenswrapper[4775]: I0123 14:36:20.961356 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:36:21 crc kubenswrapper[4775]: I0123 14:36:21.731404 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2f66b57-925b-4d68-9917-77fded405cfd" path="/var/lib/kubelet/pods/e2f66b57-925b-4d68-9917-77fded405cfd/volumes"
Jan 23 14:36:21 crc kubenswrapper[4775]: I0123 14:36:21.740470 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"a43d5dd7-2b7b-4806-b358-976cf374cd43","Type":"ContainerStarted","Data":"37f72809eb5ac9ef19d9f3238fb00e4dda525d2962892965c464bb0691074a87"}
Jan 23 14:36:21 crc kubenswrapper[4775]: I0123 14:36:21.751830 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:36:23 crc kubenswrapper[4775]: I0123 14:36:23.175339 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:36:23 crc kubenswrapper[4775]: I0123 14:36:23.752707 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8"]
Jan 23 14:36:23 crc kubenswrapper[4775]: I0123 14:36:23.754161 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8"]
Jan 23 14:36:23 crc kubenswrapper[4775]: I0123 14:36:23.754268 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8"
Jan 23 14:36:23 crc kubenswrapper[4775]: I0123 14:36:23.757160 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-config-data"
Jan 23 14:36:23 crc kubenswrapper[4775]: I0123 14:36:23.758379 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-scripts"
Jan 23 14:36:23 crc kubenswrapper[4775]: I0123 14:36:23.844412 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8tsw\" (UniqueName: \"kubernetes.io/projected/3ef19dc5-1d78-479c-8220-340c46c44bdf-kube-api-access-h8tsw\") pod \"nova-kuttl-cell1-cell-mapping-4gfb8\" (UID: \"3ef19dc5-1d78-479c-8220-340c46c44bdf\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8"
Jan 23 14:36:23 crc kubenswrapper[4775]: I0123 14:36:23.844486 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef19dc5-1d78-479c-8220-340c46c44bdf-scripts\") pod \"nova-kuttl-cell1-cell-mapping-4gfb8\" (UID: \"3ef19dc5-1d78-479c-8220-340c46c44bdf\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8"
Jan 23 14:36:23 crc kubenswrapper[4775]: I0123 14:36:23.844526 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef19dc5-1d78-479c-8220-340c46c44bdf-config-data\") pod \"nova-kuttl-cell1-cell-mapping-4gfb8\" (UID: \"3ef19dc5-1d78-479c-8220-340c46c44bdf\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8"
Jan 23 14:36:23 crc kubenswrapper[4775]: I0123 14:36:23.946918 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8tsw\" (UniqueName: \"kubernetes.io/projected/3ef19dc5-1d78-479c-8220-340c46c44bdf-kube-api-access-h8tsw\") pod \"nova-kuttl-cell1-cell-mapping-4gfb8\" (UID: \"3ef19dc5-1d78-479c-8220-340c46c44bdf\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8"
Jan 23 14:36:23 crc kubenswrapper[4775]: I0123 14:36:23.946994 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef19dc5-1d78-479c-8220-340c46c44bdf-scripts\") pod \"nova-kuttl-cell1-cell-mapping-4gfb8\" (UID: \"3ef19dc5-1d78-479c-8220-340c46c44bdf\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8"
Jan 23 14:36:23 crc kubenswrapper[4775]: I0123 14:36:23.947035 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef19dc5-1d78-479c-8220-340c46c44bdf-config-data\") pod \"nova-kuttl-cell1-cell-mapping-4gfb8\" (UID: \"3ef19dc5-1d78-479c-8220-340c46c44bdf\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8"
Jan 23 14:36:23 crc kubenswrapper[4775]: I0123 14:36:23.955743 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef19dc5-1d78-479c-8220-340c46c44bdf-config-data\") pod \"nova-kuttl-cell1-cell-mapping-4gfb8\" (UID: \"3ef19dc5-1d78-479c-8220-340c46c44bdf\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8"
Jan 23 14:36:23 crc kubenswrapper[4775]: I0123 14:36:23.968188 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/3ef19dc5-1d78-479c-8220-340c46c44bdf-scripts\") pod \"nova-kuttl-cell1-cell-mapping-4gfb8\" (UID: \"3ef19dc5-1d78-479c-8220-340c46c44bdf\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8" Jan 23 14:36:23 crc kubenswrapper[4775]: I0123 14:36:23.976936 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8tsw\" (UniqueName: \"kubernetes.io/projected/3ef19dc5-1d78-479c-8220-340c46c44bdf-kube-api-access-h8tsw\") pod \"nova-kuttl-cell1-cell-mapping-4gfb8\" (UID: \"3ef19dc5-1d78-479c-8220-340c46c44bdf\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8" Jan 23 14:36:24 crc kubenswrapper[4775]: I0123 14:36:24.093116 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8" Jan 23 14:36:24 crc kubenswrapper[4775]: I0123 14:36:24.455426 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:24 crc kubenswrapper[4775]: I0123 14:36:24.455534 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:24 crc kubenswrapper[4775]: I0123 14:36:24.561243 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8"] Jan 23 14:36:24 crc kubenswrapper[4775]: W0123 14:36:24.566293 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ef19dc5_1d78_479c_8220_340c46c44bdf.slice/crio-81cd66e9db4ececd53a92271283d2bc06593ff34f6f02f3860ff6bae30de38ac WatchSource:0}: Error finding container 81cd66e9db4ececd53a92271283d2bc06593ff34f6f02f3860ff6bae30de38ac: Status 404 returned error can't find the container with id 81cd66e9db4ececd53a92271283d2bc06593ff34f6f02f3860ff6bae30de38ac Jan 23 14:36:24 crc kubenswrapper[4775]: I0123 14:36:24.784435 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8" event={"ID":"3ef19dc5-1d78-479c-8220-340c46c44bdf","Type":"ContainerStarted","Data":"7866fa95041ef01597a04bb378890e5ad494e3f63a1535140905408dc45663a9"} Jan 23 14:36:24 crc kubenswrapper[4775]: I0123 14:36:24.785090 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8" event={"ID":"3ef19dc5-1d78-479c-8220-340c46c44bdf","Type":"ContainerStarted","Data":"81cd66e9db4ececd53a92271283d2bc06593ff34f6f02f3860ff6bae30de38ac"} Jan 23 14:36:24 crc kubenswrapper[4775]: I0123 14:36:24.804219 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8" podStartSLOduration=1.8042027059999999 podStartE2EDuration="1.804202706s" podCreationTimestamp="2026-01-23 14:36:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:36:24.802521291 +0000 UTC m=+1931.797350041" watchObservedRunningTime="2026-01-23 14:36:24.804202706 +0000 UTC m=+1931.799031456" Jan 23 14:36:25 crc kubenswrapper[4775]: I0123 14:36:25.104424 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:36:29 crc kubenswrapper[4775]: I0123 14:36:29.068749 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:29 
crc kubenswrapper[4775]: I0123 14:36:29.069441 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:29 crc kubenswrapper[4775]: I0123 14:36:29.454927 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:29 crc kubenswrapper[4775]: I0123 14:36:29.455394 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:29 crc kubenswrapper[4775]: I0123 14:36:29.838297 4775 generic.go:334] "Generic (PLEG): container finished" podID="3ef19dc5-1d78-479c-8220-340c46c44bdf" containerID="7866fa95041ef01597a04bb378890e5ad494e3f63a1535140905408dc45663a9" exitCode=0 Jan 23 14:36:29 crc kubenswrapper[4775]: I0123 14:36:29.838384 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8" event={"ID":"3ef19dc5-1d78-479c-8220-340c46c44bdf","Type":"ContainerDied","Data":"7866fa95041ef01597a04bb378890e5ad494e3f63a1535140905408dc45663a9"} Jan 23 14:36:30 crc kubenswrapper[4775]: I0123 14:36:30.105157 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:36:30 crc kubenswrapper[4775]: I0123 14:36:30.150099 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="5105347b-2714-4def-a8e9-8f2e72aa6a0e" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.226:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:36:30 crc kubenswrapper[4775]: I0123 14:36:30.150338 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="5105347b-2714-4def-a8e9-8f2e72aa6a0e" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.226:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:36:30 crc kubenswrapper[4775]: I0123 14:36:30.151174 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:36:30 crc kubenswrapper[4775]: I0123 14:36:30.537080 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.227:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:36:30 crc kubenswrapper[4775]: I0123 14:36:30.537072 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.227:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:36:30 crc kubenswrapper[4775]: I0123 14:36:30.934118 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:36:31 crc kubenswrapper[4775]: I0123 14:36:31.275340 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8" Jan 23 14:36:31 crc kubenswrapper[4775]: I0123 14:36:31.385608 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8tsw\" (UniqueName: \"kubernetes.io/projected/3ef19dc5-1d78-479c-8220-340c46c44bdf-kube-api-access-h8tsw\") pod \"3ef19dc5-1d78-479c-8220-340c46c44bdf\" (UID: \"3ef19dc5-1d78-479c-8220-340c46c44bdf\") " Jan 23 14:36:31 crc kubenswrapper[4775]: I0123 14:36:31.385691 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef19dc5-1d78-479c-8220-340c46c44bdf-config-data\") pod \"3ef19dc5-1d78-479c-8220-340c46c44bdf\" (UID: \"3ef19dc5-1d78-479c-8220-340c46c44bdf\") " Jan 23 14:36:31 crc kubenswrapper[4775]: I0123 14:36:31.385792 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef19dc5-1d78-479c-8220-340c46c44bdf-scripts\") pod \"3ef19dc5-1d78-479c-8220-340c46c44bdf\" (UID: \"3ef19dc5-1d78-479c-8220-340c46c44bdf\") " Jan 23 14:36:31 crc kubenswrapper[4775]: I0123 14:36:31.391115 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef19dc5-1d78-479c-8220-340c46c44bdf-scripts" (OuterVolumeSpecName: "scripts") pod "3ef19dc5-1d78-479c-8220-340c46c44bdf" (UID: "3ef19dc5-1d78-479c-8220-340c46c44bdf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:36:31 crc kubenswrapper[4775]: I0123 14:36:31.394128 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ef19dc5-1d78-479c-8220-340c46c44bdf-kube-api-access-h8tsw" (OuterVolumeSpecName: "kube-api-access-h8tsw") pod "3ef19dc5-1d78-479c-8220-340c46c44bdf" (UID: "3ef19dc5-1d78-479c-8220-340c46c44bdf"). InnerVolumeSpecName "kube-api-access-h8tsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:36:31 crc kubenswrapper[4775]: I0123 14:36:31.410649 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef19dc5-1d78-479c-8220-340c46c44bdf-config-data" (OuterVolumeSpecName: "config-data") pod "3ef19dc5-1d78-479c-8220-340c46c44bdf" (UID: "3ef19dc5-1d78-479c-8220-340c46c44bdf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:36:31 crc kubenswrapper[4775]: I0123 14:36:31.487670 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8tsw\" (UniqueName: \"kubernetes.io/projected/3ef19dc5-1d78-479c-8220-340c46c44bdf-kube-api-access-h8tsw\") on node \"crc\" DevicePath \"\"" Jan 23 14:36:31 crc kubenswrapper[4775]: I0123 14:36:31.487707 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef19dc5-1d78-479c-8220-340c46c44bdf-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:36:31 crc kubenswrapper[4775]: I0123 14:36:31.487716 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef19dc5-1d78-479c-8220-340c46c44bdf-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:36:31 crc kubenswrapper[4775]: I0123 14:36:31.866874 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8" event={"ID":"3ef19dc5-1d78-479c-8220-340c46c44bdf","Type":"ContainerDied","Data":"81cd66e9db4ececd53a92271283d2bc06593ff34f6f02f3860ff6bae30de38ac"} Jan 23 14:36:31 crc kubenswrapper[4775]: I0123 14:36:31.866919 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81cd66e9db4ececd53a92271283d2bc06593ff34f6f02f3860ff6bae30de38ac" Jan 23 14:36:31 crc kubenswrapper[4775]: I0123 14:36:31.866944 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8" Jan 23 14:36:32 crc kubenswrapper[4775]: I0123 14:36:32.155840 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:36:32 crc kubenswrapper[4775]: I0123 14:36:32.156144 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="5105347b-2714-4def-a8e9-8f2e72aa6a0e" containerName="nova-kuttl-api-log" containerID="cri-o://cddd25a43481286d21a7942ec0a19f14f1525739081c8dcfc723d00716195f00" gracePeriod=30 Jan 23 14:36:32 crc kubenswrapper[4775]: I0123 14:36:32.156178 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="5105347b-2714-4def-a8e9-8f2e72aa6a0e" containerName="nova-kuttl-api-api" containerID="cri-o://00e4a30509a85fcd43493ad6ff99a3894421472f9f200b72e0d40abb4cb63325" gracePeriod=30 Jan 23 14:36:32 crc kubenswrapper[4775]: I0123 14:36:32.171866 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:36:32 crc kubenswrapper[4775]: I0123 14:36:32.234609 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:36:32 crc kubenswrapper[4775]: I0123 14:36:32.235154 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c" containerName="nova-kuttl-metadata-log" containerID="cri-o://a52b2b8897ad657ebbfabc500688e10f06efb29fd89d42f56d12ede604cb2920" gracePeriod=30 Jan 23 14:36:32 crc kubenswrapper[4775]: I0123 14:36:32.235274 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://b5fa7a3b853cc420f905b785b5f3e45bf3cc366b5a136d58547716107cff7818" 
gracePeriod=30 Jan 23 14:36:32 crc kubenswrapper[4775]: I0123 14:36:32.892988 4775 generic.go:334] "Generic (PLEG): container finished" podID="5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c" containerID="a52b2b8897ad657ebbfabc500688e10f06efb29fd89d42f56d12ede604cb2920" exitCode=143 Jan 23 14:36:32 crc kubenswrapper[4775]: I0123 14:36:32.893127 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c","Type":"ContainerDied","Data":"a52b2b8897ad657ebbfabc500688e10f06efb29fd89d42f56d12ede604cb2920"} Jan 23 14:36:32 crc kubenswrapper[4775]: I0123 14:36:32.896791 4775 generic.go:334] "Generic (PLEG): container finished" podID="5105347b-2714-4def-a8e9-8f2e72aa6a0e" containerID="cddd25a43481286d21a7942ec0a19f14f1525739081c8dcfc723d00716195f00" exitCode=143 Jan 23 14:36:32 crc kubenswrapper[4775]: I0123 14:36:32.896865 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"5105347b-2714-4def-a8e9-8f2e72aa6a0e","Type":"ContainerDied","Data":"cddd25a43481286d21a7942ec0a19f14f1525739081c8dcfc723d00716195f00"} Jan 23 14:36:32 crc kubenswrapper[4775]: I0123 14:36:32.897109 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="a43d5dd7-2b7b-4806-b358-976cf374cd43" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://37f72809eb5ac9ef19d9f3238fb00e4dda525d2962892965c464bb0691074a87" gracePeriod=30 Jan 23 14:36:35 crc kubenswrapper[4775]: E0123 14:36:35.106713 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="37f72809eb5ac9ef19d9f3238fb00e4dda525d2962892965c464bb0691074a87" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 14:36:35 crc kubenswrapper[4775]: E0123 14:36:35.110997 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="37f72809eb5ac9ef19d9f3238fb00e4dda525d2962892965c464bb0691074a87" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 14:36:35 crc kubenswrapper[4775]: E0123 14:36:35.113416 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="37f72809eb5ac9ef19d9f3238fb00e4dda525d2962892965c464bb0691074a87" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 14:36:35 crc kubenswrapper[4775]: E0123 14:36:35.113473 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="a43d5dd7-2b7b-4806-b358-976cf374cd43" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.816797 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.827173 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.934362 4775 generic.go:334] "Generic (PLEG): container finished" podID="5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c" containerID="b5fa7a3b853cc420f905b785b5f3e45bf3cc366b5a136d58547716107cff7818" exitCode=0 Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.934465 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c","Type":"ContainerDied","Data":"b5fa7a3b853cc420f905b785b5f3e45bf3cc366b5a136d58547716107cff7818"} Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.934507 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c","Type":"ContainerDied","Data":"caa9115385ef2dd139755ddcad55b0d6b1eaeda47902eb07664c2a2e9e6d25fe"} Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.934723 4775 scope.go:117] "RemoveContainer" containerID="b5fa7a3b853cc420f905b785b5f3e45bf3cc366b5a136d58547716107cff7818" Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.934933 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.937663 4775 generic.go:334] "Generic (PLEG): container finished" podID="5105347b-2714-4def-a8e9-8f2e72aa6a0e" containerID="00e4a30509a85fcd43493ad6ff99a3894421472f9f200b72e0d40abb4cb63325" exitCode=0 Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.937693 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"5105347b-2714-4def-a8e9-8f2e72aa6a0e","Type":"ContainerDied","Data":"00e4a30509a85fcd43493ad6ff99a3894421472f9f200b72e0d40abb4cb63325"} Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.937717 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"5105347b-2714-4def-a8e9-8f2e72aa6a0e","Type":"ContainerDied","Data":"d9e4670bc66038767fb0431260ce9a971d6918fc58f17b5251c95f35343d4184"} Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.937767 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.956890 4775 scope.go:117] "RemoveContainer" containerID="a52b2b8897ad657ebbfabc500688e10f06efb29fd89d42f56d12ede604cb2920" Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.972329 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5105347b-2714-4def-a8e9-8f2e72aa6a0e-config-data\") pod \"5105347b-2714-4def-a8e9-8f2e72aa6a0e\" (UID: \"5105347b-2714-4def-a8e9-8f2e72aa6a0e\") " Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.972387 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqng4\" (UniqueName: \"kubernetes.io/projected/5105347b-2714-4def-a8e9-8f2e72aa6a0e-kube-api-access-xqng4\") pod \"5105347b-2714-4def-a8e9-8f2e72aa6a0e\" (UID: \"5105347b-2714-4def-a8e9-8f2e72aa6a0e\") " Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.972455 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5105347b-2714-4def-a8e9-8f2e72aa6a0e-logs\") pod \"5105347b-2714-4def-a8e9-8f2e72aa6a0e\" (UID: \"5105347b-2714-4def-a8e9-8f2e72aa6a0e\") " Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.972531 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-logs\") pod \"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c\" (UID: \"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c\") " Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.972555 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tptv7\" (UniqueName: \"kubernetes.io/projected/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-kube-api-access-tptv7\") pod \"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c\" (UID: \"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c\") " Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.972584 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-config-data\") pod \"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c\" (UID: \"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c\") " Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.973053 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-logs" (OuterVolumeSpecName: "logs") pod "5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c" (UID: "5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.973372 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5105347b-2714-4def-a8e9-8f2e72aa6a0e-logs" (OuterVolumeSpecName: "logs") pod "5105347b-2714-4def-a8e9-8f2e72aa6a0e" (UID: "5105347b-2714-4def-a8e9-8f2e72aa6a0e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.978021 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-kube-api-access-tptv7" (OuterVolumeSpecName: "kube-api-access-tptv7") pod "5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c" (UID: "5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c"). 
InnerVolumeSpecName "kube-api-access-tptv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.978119 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5105347b-2714-4def-a8e9-8f2e72aa6a0e-kube-api-access-xqng4" (OuterVolumeSpecName: "kube-api-access-xqng4") pod "5105347b-2714-4def-a8e9-8f2e72aa6a0e" (UID: "5105347b-2714-4def-a8e9-8f2e72aa6a0e"). InnerVolumeSpecName "kube-api-access-xqng4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.981659 4775 scope.go:117] "RemoveContainer" containerID="b5fa7a3b853cc420f905b785b5f3e45bf3cc366b5a136d58547716107cff7818" Jan 23 14:36:35 crc kubenswrapper[4775]: E0123 14:36:35.982153 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5fa7a3b853cc420f905b785b5f3e45bf3cc366b5a136d58547716107cff7818\": container with ID starting with b5fa7a3b853cc420f905b785b5f3e45bf3cc366b5a136d58547716107cff7818 not found: ID does not exist" containerID="b5fa7a3b853cc420f905b785b5f3e45bf3cc366b5a136d58547716107cff7818" Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.982191 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5fa7a3b853cc420f905b785b5f3e45bf3cc366b5a136d58547716107cff7818"} err="failed to get container status \"b5fa7a3b853cc420f905b785b5f3e45bf3cc366b5a136d58547716107cff7818\": rpc error: code = NotFound desc = could not find container \"b5fa7a3b853cc420f905b785b5f3e45bf3cc366b5a136d58547716107cff7818\": container with ID starting with b5fa7a3b853cc420f905b785b5f3e45bf3cc366b5a136d58547716107cff7818 not found: ID does not exist" Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.982217 4775 scope.go:117] "RemoveContainer" containerID="a52b2b8897ad657ebbfabc500688e10f06efb29fd89d42f56d12ede604cb2920" Jan 23 14:36:35 crc kubenswrapper[4775]: E0123 14:36:35.982621 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a52b2b8897ad657ebbfabc500688e10f06efb29fd89d42f56d12ede604cb2920\": container with ID starting with a52b2b8897ad657ebbfabc500688e10f06efb29fd89d42f56d12ede604cb2920 not found: ID does not exist" containerID="a52b2b8897ad657ebbfabc500688e10f06efb29fd89d42f56d12ede604cb2920" Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.982657 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a52b2b8897ad657ebbfabc500688e10f06efb29fd89d42f56d12ede604cb2920"} err="failed to get container status \"a52b2b8897ad657ebbfabc500688e10f06efb29fd89d42f56d12ede604cb2920\": rpc error: code = NotFound desc = could not find container \"a52b2b8897ad657ebbfabc500688e10f06efb29fd89d42f56d12ede604cb2920\": container with ID starting with a52b2b8897ad657ebbfabc500688e10f06efb29fd89d42f56d12ede604cb2920 not found: ID does not exist" Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.982678 4775 scope.go:117] "RemoveContainer" containerID="00e4a30509a85fcd43493ad6ff99a3894421472f9f200b72e0d40abb4cb63325" Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.993134 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-config-data" (OuterVolumeSpecName: "config-data") pod "5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c" (UID: 
"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.004552 4775 scope.go:117] "RemoveContainer" containerID="cddd25a43481286d21a7942ec0a19f14f1525739081c8dcfc723d00716195f00" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.012987 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5105347b-2714-4def-a8e9-8f2e72aa6a0e-config-data" (OuterVolumeSpecName: "config-data") pod "5105347b-2714-4def-a8e9-8f2e72aa6a0e" (UID: "5105347b-2714-4def-a8e9-8f2e72aa6a0e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.023022 4775 scope.go:117] "RemoveContainer" containerID="00e4a30509a85fcd43493ad6ff99a3894421472f9f200b72e0d40abb4cb63325" Jan 23 14:36:36 crc kubenswrapper[4775]: E0123 14:36:36.023396 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00e4a30509a85fcd43493ad6ff99a3894421472f9f200b72e0d40abb4cb63325\": container with ID starting with 00e4a30509a85fcd43493ad6ff99a3894421472f9f200b72e0d40abb4cb63325 not found: ID does not exist" containerID="00e4a30509a85fcd43493ad6ff99a3894421472f9f200b72e0d40abb4cb63325" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.023434 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00e4a30509a85fcd43493ad6ff99a3894421472f9f200b72e0d40abb4cb63325"} err="failed to get container status \"00e4a30509a85fcd43493ad6ff99a3894421472f9f200b72e0d40abb4cb63325\": rpc error: code = NotFound desc = could not find container \"00e4a30509a85fcd43493ad6ff99a3894421472f9f200b72e0d40abb4cb63325\": container with ID starting with 00e4a30509a85fcd43493ad6ff99a3894421472f9f200b72e0d40abb4cb63325 not found: ID does not exist" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.023459 4775 scope.go:117] "RemoveContainer" containerID="cddd25a43481286d21a7942ec0a19f14f1525739081c8dcfc723d00716195f00" Jan 23 14:36:36 crc kubenswrapper[4775]: E0123 14:36:36.023792 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cddd25a43481286d21a7942ec0a19f14f1525739081c8dcfc723d00716195f00\": container with ID starting with cddd25a43481286d21a7942ec0a19f14f1525739081c8dcfc723d00716195f00 not found: ID does not exist" containerID="cddd25a43481286d21a7942ec0a19f14f1525739081c8dcfc723d00716195f00" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.023847 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cddd25a43481286d21a7942ec0a19f14f1525739081c8dcfc723d00716195f00"} err="failed to get container status \"cddd25a43481286d21a7942ec0a19f14f1525739081c8dcfc723d00716195f00\": rpc error: code = NotFound desc = could not find container \"cddd25a43481286d21a7942ec0a19f14f1525739081c8dcfc723d00716195f00\": container with ID starting with cddd25a43481286d21a7942ec0a19f14f1525739081c8dcfc723d00716195f00 not found: ID does not exist" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.074465 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqng4\" (UniqueName: \"kubernetes.io/projected/5105347b-2714-4def-a8e9-8f2e72aa6a0e-kube-api-access-xqng4\") on node \"crc\" DevicePath \"\"" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.074516 
4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5105347b-2714-4def-a8e9-8f2e72aa6a0e-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.074537 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.074554 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tptv7\" (UniqueName: \"kubernetes.io/projected/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-kube-api-access-tptv7\") on node \"crc\" DevicePath \"\"" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.074577 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.074595 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5105347b-2714-4def-a8e9-8f2e72aa6a0e-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.291424 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.316573 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.329388 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:36:36 crc kubenswrapper[4775]: E0123 14:36:36.329892 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5105347b-2714-4def-a8e9-8f2e72aa6a0e" containerName="nova-kuttl-api-log" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.329920 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="5105347b-2714-4def-a8e9-8f2e72aa6a0e" containerName="nova-kuttl-api-log" Jan 23 14:36:36 crc kubenswrapper[4775]: E0123 14:36:36.329950 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c" containerName="nova-kuttl-metadata-log" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.329958 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c" containerName="nova-kuttl-metadata-log" Jan 23 14:36:36 crc kubenswrapper[4775]: E0123 14:36:36.329975 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5105347b-2714-4def-a8e9-8f2e72aa6a0e" containerName="nova-kuttl-api-api" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.329984 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="5105347b-2714-4def-a8e9-8f2e72aa6a0e" containerName="nova-kuttl-api-api" Jan 23 14:36:36 crc kubenswrapper[4775]: E0123 14:36:36.330004 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef19dc5-1d78-479c-8220-340c46c44bdf" containerName="nova-manage" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.330012 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef19dc5-1d78-479c-8220-340c46c44bdf" containerName="nova-manage" Jan 23 14:36:36 crc kubenswrapper[4775]: E0123 14:36:36.330022 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c" containerName="nova-kuttl-metadata-metadata" 
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.330030 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c" containerName="nova-kuttl-metadata-metadata" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.330208 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c" containerName="nova-kuttl-metadata-metadata" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.330257 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="5105347b-2714-4def-a8e9-8f2e72aa6a0e" containerName="nova-kuttl-api-api" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.330276 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="5105347b-2714-4def-a8e9-8f2e72aa6a0e" containerName="nova-kuttl-api-log" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.330292 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c" containerName="nova-kuttl-metadata-log" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.330311 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ef19dc5-1d78-479c-8220-340c46c44bdf" containerName="nova-manage" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.333015 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.336480 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.368634 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.386932 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.394655 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.408119 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.410720 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.417025 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.418562 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.490130 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72d0a843-11de-43a6-9c92-6a65a6d406ec-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"72d0a843-11de-43a6-9c92-6a65a6d406ec\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.490196 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72d0a843-11de-43a6-9c92-6a65a6d406ec-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"72d0a843-11de-43a6-9c92-6a65a6d406ec\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.490716 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bglpx\" (UniqueName: \"kubernetes.io/projected/72d0a843-11de-43a6-9c92-6a65a6d406ec-kube-api-access-bglpx\") pod \"nova-kuttl-metadata-0\" (UID: \"72d0a843-11de-43a6-9c92-6a65a6d406ec\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.592838 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j849m\" (UniqueName: \"kubernetes.io/projected/56066bf2-4408-46e5-8df0-6ce62447bf2a-kube-api-access-j849m\") pod \"nova-kuttl-api-0\" (UID: \"56066bf2-4408-46e5-8df0-6ce62447bf2a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.592903 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bglpx\" (UniqueName: \"kubernetes.io/projected/72d0a843-11de-43a6-9c92-6a65a6d406ec-kube-api-access-bglpx\") pod \"nova-kuttl-metadata-0\" (UID: \"72d0a843-11de-43a6-9c92-6a65a6d406ec\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.592950 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56066bf2-4408-46e5-8df0-6ce62447bf2a-config-data\") pod \"nova-kuttl-api-0\" (UID: \"56066bf2-4408-46e5-8df0-6ce62447bf2a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.593012 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72d0a843-11de-43a6-9c92-6a65a6d406ec-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"72d0a843-11de-43a6-9c92-6a65a6d406ec\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.593047 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72d0a843-11de-43a6-9c92-6a65a6d406ec-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"72d0a843-11de-43a6-9c92-6a65a6d406ec\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:36 crc 
kubenswrapper[4775]: I0123 14:36:36.593082 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56066bf2-4408-46e5-8df0-6ce62447bf2a-logs\") pod \"nova-kuttl-api-0\" (UID: \"56066bf2-4408-46e5-8df0-6ce62447bf2a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.593626 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72d0a843-11de-43a6-9c92-6a65a6d406ec-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"72d0a843-11de-43a6-9c92-6a65a6d406ec\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.598368 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72d0a843-11de-43a6-9c92-6a65a6d406ec-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"72d0a843-11de-43a6-9c92-6a65a6d406ec\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.622426 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bglpx\" (UniqueName: \"kubernetes.io/projected/72d0a843-11de-43a6-9c92-6a65a6d406ec-kube-api-access-bglpx\") pod \"nova-kuttl-metadata-0\" (UID: \"72d0a843-11de-43a6-9c92-6a65a6d406ec\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.695006 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j849m\" (UniqueName: \"kubernetes.io/projected/56066bf2-4408-46e5-8df0-6ce62447bf2a-kube-api-access-j849m\") pod \"nova-kuttl-api-0\" (UID: \"56066bf2-4408-46e5-8df0-6ce62447bf2a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.695351 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56066bf2-4408-46e5-8df0-6ce62447bf2a-config-data\") pod \"nova-kuttl-api-0\" (UID: \"56066bf2-4408-46e5-8df0-6ce62447bf2a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.695411 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56066bf2-4408-46e5-8df0-6ce62447bf2a-logs\") pod \"nova-kuttl-api-0\" (UID: \"56066bf2-4408-46e5-8df0-6ce62447bf2a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.695964 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56066bf2-4408-46e5-8df0-6ce62447bf2a-logs\") pod \"nova-kuttl-api-0\" (UID: \"56066bf2-4408-46e5-8df0-6ce62447bf2a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.701434 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56066bf2-4408-46e5-8df0-6ce62447bf2a-config-data\") pod \"nova-kuttl-api-0\" (UID: \"56066bf2-4408-46e5-8df0-6ce62447bf2a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.716453 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j849m\" (UniqueName: \"kubernetes.io/projected/56066bf2-4408-46e5-8df0-6ce62447bf2a-kube-api-access-j849m\") pod 
\"nova-kuttl-api-0\" (UID: \"56066bf2-4408-46e5-8df0-6ce62447bf2a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.745498 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.753134 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.954420 4775 generic.go:334] "Generic (PLEG): container finished" podID="a43d5dd7-2b7b-4806-b358-976cf374cd43" containerID="37f72809eb5ac9ef19d9f3238fb00e4dda525d2962892965c464bb0691074a87" exitCode=0 Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.954488 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"a43d5dd7-2b7b-4806-b358-976cf374cd43","Type":"ContainerDied","Data":"37f72809eb5ac9ef19d9f3238fb00e4dda525d2962892965c464bb0691074a87"} Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.017472 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.102603 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a43d5dd7-2b7b-4806-b358-976cf374cd43-config-data\") pod \"a43d5dd7-2b7b-4806-b358-976cf374cd43\" (UID: \"a43d5dd7-2b7b-4806-b358-976cf374cd43\") " Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.102675 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsvjz\" (UniqueName: \"kubernetes.io/projected/a43d5dd7-2b7b-4806-b358-976cf374cd43-kube-api-access-qsvjz\") pod \"a43d5dd7-2b7b-4806-b358-976cf374cd43\" (UID: \"a43d5dd7-2b7b-4806-b358-976cf374cd43\") " Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.108298 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a43d5dd7-2b7b-4806-b358-976cf374cd43-kube-api-access-qsvjz" (OuterVolumeSpecName: "kube-api-access-qsvjz") pod "a43d5dd7-2b7b-4806-b358-976cf374cd43" (UID: "a43d5dd7-2b7b-4806-b358-976cf374cd43"). InnerVolumeSpecName "kube-api-access-qsvjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.124600 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a43d5dd7-2b7b-4806-b358-976cf374cd43-config-data" (OuterVolumeSpecName: "config-data") pod "a43d5dd7-2b7b-4806-b358-976cf374cd43" (UID: "a43d5dd7-2b7b-4806-b358-976cf374cd43"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.203881 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a43d5dd7-2b7b-4806-b358-976cf374cd43-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.203909 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsvjz\" (UniqueName: \"kubernetes.io/projected/a43d5dd7-2b7b-4806-b358-976cf374cd43-kube-api-access-qsvjz\") on node \"crc\" DevicePath \"\"" Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.240238 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:36:37 crc kubenswrapper[4775]: W0123 14:36:37.248091 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72d0a843_11de_43a6_9c92_6a65a6d406ec.slice/crio-abf5418a3ef69bff011b9eab0c3bd63d50df19032ceef2f17236f9c5ab52f00d WatchSource:0}: Error finding container abf5418a3ef69bff011b9eab0c3bd63d50df19032ceef2f17236f9c5ab52f00d: Status 404 returned error can't find the container with id abf5418a3ef69bff011b9eab0c3bd63d50df19032ceef2f17236f9c5ab52f00d Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.325286 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:36:37 crc kubenswrapper[4775]: W0123 14:36:37.353034 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56066bf2_4408_46e5_8df0_6ce62447bf2a.slice/crio-5c484728c59dbbc65de7635caa9ddef878b4aaaadf19ad5a8eddad464e1f9152 WatchSource:0}: Error finding container 5c484728c59dbbc65de7635caa9ddef878b4aaaadf19ad5a8eddad464e1f9152: Status 404 returned error can't find the container with id 5c484728c59dbbc65de7635caa9ddef878b4aaaadf19ad5a8eddad464e1f9152 Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.723700 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5105347b-2714-4def-a8e9-8f2e72aa6a0e" path="/var/lib/kubelet/pods/5105347b-2714-4def-a8e9-8f2e72aa6a0e/volumes" Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.726192 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c" path="/var/lib/kubelet/pods/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c/volumes" Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.967243 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"72d0a843-11de-43a6-9c92-6a65a6d406ec","Type":"ContainerStarted","Data":"e49293d26a1d43b1b80a74401fb27db477b1288d691ee579a23570aa8c32f3bb"} Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.967292 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"72d0a843-11de-43a6-9c92-6a65a6d406ec","Type":"ContainerStarted","Data":"3ccd39a9c93904f3782dbc53718b2100dfaad158606e6a6bc627284ffe59845d"} Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.967308 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"72d0a843-11de-43a6-9c92-6a65a6d406ec","Type":"ContainerStarted","Data":"abf5418a3ef69bff011b9eab0c3bd63d50df19032ceef2f17236f9c5ab52f00d"} Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.969544 4775 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"a43d5dd7-2b7b-4806-b358-976cf374cd43","Type":"ContainerDied","Data":"5ab988d9e205a230e0cf678a8c867f1fb120bd7bcf1bf799de80ca2df82b80ed"} Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.969574 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.969618 4775 scope.go:117] "RemoveContainer" containerID="37f72809eb5ac9ef19d9f3238fb00e4dda525d2962892965c464bb0691074a87" Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.972319 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"56066bf2-4408-46e5-8df0-6ce62447bf2a","Type":"ContainerStarted","Data":"d22b7410bec1d7df615f28412d9ac6a6d8a1918ed885d07df4ab369c8e80dddf"} Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.972406 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"56066bf2-4408-46e5-8df0-6ce62447bf2a","Type":"ContainerStarted","Data":"a10fcd98beba1aa98e14b18f127100ea010a9b336b358c4d59126c39e3ef3c78"} Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.972428 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"56066bf2-4408-46e5-8df0-6ce62447bf2a","Type":"ContainerStarted","Data":"5c484728c59dbbc65de7635caa9ddef878b4aaaadf19ad5a8eddad464e1f9152"} Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.016029 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.016008286 podStartE2EDuration="2.016008286s" podCreationTimestamp="2026-01-23 14:36:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:36:37.992093982 +0000 UTC m=+1944.986922752" watchObservedRunningTime="2026-01-23 14:36:38.016008286 +0000 UTC m=+1945.010837046" Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.020691 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.020671805 podStartE2EDuration="2.020671805s" podCreationTimestamp="2026-01-23 14:36:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:36:38.012109157 +0000 UTC m=+1945.006937917" watchObservedRunningTime="2026-01-23 14:36:38.020671805 +0000 UTC m=+1945.015500565" Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.032838 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.056265 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.063367 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:36:38 crc kubenswrapper[4775]: E0123 14:36:38.063800 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a43d5dd7-2b7b-4806-b358-976cf374cd43" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.063841 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a43d5dd7-2b7b-4806-b358-976cf374cd43" 
containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.064062 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="a43d5dd7-2b7b-4806-b358-976cf374cd43" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.064683 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.067564 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.070119 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.221167 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.221297 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j85p6\" (UniqueName: \"kubernetes.io/projected/bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0-kube-api-access-j85p6\") pod \"nova-kuttl-scheduler-0\" (UID: \"bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.322953 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.323070 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j85p6\" (UniqueName: \"kubernetes.io/projected/bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0-kube-api-access-j85p6\") pod \"nova-kuttl-scheduler-0\" (UID: \"bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.327717 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.338859 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j85p6\" (UniqueName: \"kubernetes.io/projected/bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0-kube-api-access-j85p6\") pod \"nova-kuttl-scheduler-0\" (UID: \"bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.392519 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.720734 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:36:38 crc kubenswrapper[4775]: W0123 14:36:38.724962 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdfa6b38_3f0a_4f8e_9bd4_ec3907a919f0.slice/crio-ba3441fa5fbd7fb0e40e27e177aa55e82097f7b1c32b895c5263296a07da0bca WatchSource:0}: Error finding container ba3441fa5fbd7fb0e40e27e177aa55e82097f7b1c32b895c5263296a07da0bca: Status 404 returned error can't find the container with id ba3441fa5fbd7fb0e40e27e177aa55e82097f7b1c32b895c5263296a07da0bca Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.986182 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0","Type":"ContainerStarted","Data":"0624c3c816b60799e510395c36653db02ae4c7a578578b24e14fa96b3ff92dec"} Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.986646 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0","Type":"ContainerStarted","Data":"ba3441fa5fbd7fb0e40e27e177aa55e82097f7b1c32b895c5263296a07da0bca"} Jan 23 14:36:39 crc kubenswrapper[4775]: I0123 14:36:39.012854 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=1.012797291 podStartE2EDuration="1.012797291s" podCreationTimestamp="2026-01-23 14:36:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:36:39.003709218 +0000 UTC m=+1945.998537948" watchObservedRunningTime="2026-01-23 14:36:39.012797291 +0000 UTC m=+1946.007626081" Jan 23 14:36:39 crc kubenswrapper[4775]: I0123 14:36:39.730232 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a43d5dd7-2b7b-4806-b358-976cf374cd43" path="/var/lib/kubelet/pods/a43d5dd7-2b7b-4806-b358-976cf374cd43/volumes" Jan 23 14:36:41 crc kubenswrapper[4775]: I0123 14:36:41.746510 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:41 crc kubenswrapper[4775]: I0123 14:36:41.746599 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:43 crc kubenswrapper[4775]: I0123 14:36:43.393138 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:36:46 crc kubenswrapper[4775]: I0123 14:36:46.746453 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:46 crc kubenswrapper[4775]: I0123 14:36:46.747267 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:46 crc kubenswrapper[4775]: I0123 14:36:46.753969 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:46 crc kubenswrapper[4775]: I0123 14:36:46.754033 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:47 crc kubenswrapper[4775]: 
I0123 14:36:47.910976 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="56066bf2-4408-46e5-8df0-6ce62447bf2a" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.231:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:36:47 crc kubenswrapper[4775]: I0123 14:36:47.911043 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="56066bf2-4408-46e5-8df0-6ce62447bf2a" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.231:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:36:47 crc kubenswrapper[4775]: I0123 14:36:47.911048 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="72d0a843-11de-43a6-9c92-6a65a6d406ec" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.230:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:36:47 crc kubenswrapper[4775]: I0123 14:36:47.910969 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="72d0a843-11de-43a6-9c92-6a65a6d406ec" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.230:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:36:48 crc kubenswrapper[4775]: I0123 14:36:48.393746 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:36:48 crc kubenswrapper[4775]: I0123 14:36:48.448281 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:36:49 crc kubenswrapper[4775]: I0123 14:36:49.138929 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:36:53 crc kubenswrapper[4775]: I0123 14:36:53.219274 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:36:53 crc kubenswrapper[4775]: I0123 14:36:53.219891 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:36:56 crc kubenswrapper[4775]: I0123 14:36:56.750532 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:56 crc kubenswrapper[4775]: I0123 14:36:56.751016 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:56 crc kubenswrapper[4775]: I0123 14:36:56.753753 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:56 crc kubenswrapper[4775]: I0123 14:36:56.753962 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" 
Jan 23 14:36:56 crc kubenswrapper[4775]: I0123 14:36:56.759239 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:56 crc kubenswrapper[4775]: I0123 14:36:56.759822 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:56 crc kubenswrapper[4775]: I0123 14:36:56.763779 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:56 crc kubenswrapper[4775]: I0123 14:36:56.767050 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:57 crc kubenswrapper[4775]: I0123 14:36:57.195427 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:57 crc kubenswrapper[4775]: I0123 14:36:57.199726 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:58 crc kubenswrapper[4775]: I0123 14:36:58.836594 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz"]
Jan 23 14:36:58 crc kubenswrapper[4775]: I0123 14:36:58.838086 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz"
Jan 23 14:36:58 crc kubenswrapper[4775]: I0123 14:36:58.840367 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-config-data"
Jan 23 14:36:58 crc kubenswrapper[4775]: I0123 14:36:58.840985 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-scripts"
Jan 23 14:36:58 crc kubenswrapper[4775]: I0123 14:36:58.847854 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz"]
Jan 23 14:36:58 crc kubenswrapper[4775]: I0123 14:36:58.910616 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e8f7bb4-6671-4ef8-b35a-45059af73b01-config-data\") pod \"nova-kuttl-cell1-cell-delete-w7tbz\" (UID: \"9e8f7bb4-6671-4ef8-b35a-45059af73b01\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz"
Jan 23 14:36:58 crc kubenswrapper[4775]: I0123 14:36:58.910695 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn495\" (UniqueName: \"kubernetes.io/projected/9e8f7bb4-6671-4ef8-b35a-45059af73b01-kube-api-access-rn495\") pod \"nova-kuttl-cell1-cell-delete-w7tbz\" (UID: \"9e8f7bb4-6671-4ef8-b35a-45059af73b01\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz"
Jan 23 14:36:58 crc kubenswrapper[4775]: I0123 14:36:58.910777 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e8f7bb4-6671-4ef8-b35a-45059af73b01-scripts\") pod \"nova-kuttl-cell1-cell-delete-w7tbz\" (UID: \"9e8f7bb4-6671-4ef8-b35a-45059af73b01\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz"
Jan 23 14:36:59 crc kubenswrapper[4775]: I0123 14:36:59.012265 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e8f7bb4-6671-4ef8-b35a-45059af73b01-config-data\") pod \"nova-kuttl-cell1-cell-delete-w7tbz\" (UID: \"9e8f7bb4-6671-4ef8-b35a-45059af73b01\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz"
Jan 23 14:36:59 crc kubenswrapper[4775]: I0123 14:36:59.012366 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn495\" (UniqueName: \"kubernetes.io/projected/9e8f7bb4-6671-4ef8-b35a-45059af73b01-kube-api-access-rn495\") pod \"nova-kuttl-cell1-cell-delete-w7tbz\" (UID: \"9e8f7bb4-6671-4ef8-b35a-45059af73b01\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz"
Jan 23 14:36:59 crc kubenswrapper[4775]: I0123 14:36:59.012408 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e8f7bb4-6671-4ef8-b35a-45059af73b01-scripts\") pod \"nova-kuttl-cell1-cell-delete-w7tbz\" (UID: \"9e8f7bb4-6671-4ef8-b35a-45059af73b01\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz"
Jan 23 14:36:59 crc kubenswrapper[4775]: I0123 14:36:59.021163 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e8f7bb4-6671-4ef8-b35a-45059af73b01-scripts\") pod \"nova-kuttl-cell1-cell-delete-w7tbz\" (UID: \"9e8f7bb4-6671-4ef8-b35a-45059af73b01\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz"
Jan 23 14:36:59 crc kubenswrapper[4775]: I0123 14:36:59.021618 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e8f7bb4-6671-4ef8-b35a-45059af73b01-config-data\") pod \"nova-kuttl-cell1-cell-delete-w7tbz\" (UID: \"9e8f7bb4-6671-4ef8-b35a-45059af73b01\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz"
Jan 23 14:36:59 crc kubenswrapper[4775]: I0123 14:36:59.042386 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn495\" (UniqueName: \"kubernetes.io/projected/9e8f7bb4-6671-4ef8-b35a-45059af73b01-kube-api-access-rn495\") pod \"nova-kuttl-cell1-cell-delete-w7tbz\" (UID: \"9e8f7bb4-6671-4ef8-b35a-45059af73b01\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz"
Jan 23 14:36:59 crc kubenswrapper[4775]: I0123 14:36:59.195345 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz"
Jan 23 14:36:59 crc kubenswrapper[4775]: I0123 14:36:59.662317 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz"]
Jan 23 14:37:00 crc kubenswrapper[4775]: I0123 14:37:00.225070 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" event={"ID":"9e8f7bb4-6671-4ef8-b35a-45059af73b01","Type":"ContainerStarted","Data":"b161842b5aa3dedc238e1aa217c4cd2d9623581d6c65e953c9e5fd5b44556ad4"}
Jan 23 14:37:00 crc kubenswrapper[4775]: I0123 14:37:00.226473 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" event={"ID":"9e8f7bb4-6671-4ef8-b35a-45059af73b01","Type":"ContainerStarted","Data":"27562d541f20254a2f84db2c1a11a1410fb6f2f590a4c41036a86757dd88cf6b"}
Jan 23 14:37:04 crc kubenswrapper[4775]: I0123 14:37:04.269004 4775 generic.go:334] "Generic (PLEG): container finished" podID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerID="b161842b5aa3dedc238e1aa217c4cd2d9623581d6c65e953c9e5fd5b44556ad4" exitCode=2
Jan 23 14:37:04 crc kubenswrapper[4775]: I0123 14:37:04.269611 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" event={"ID":"9e8f7bb4-6671-4ef8-b35a-45059af73b01","Type":"ContainerDied","Data":"b161842b5aa3dedc238e1aa217c4cd2d9623581d6c65e953c9e5fd5b44556ad4"}
Jan 23 14:37:04 crc kubenswrapper[4775]: I0123 14:37:04.270147 4775 scope.go:117] "RemoveContainer" containerID="b161842b5aa3dedc238e1aa217c4cd2d9623581d6c65e953c9e5fd5b44556ad4"
Jan 23 14:37:05 crc kubenswrapper[4775]: I0123 14:37:05.285410 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" event={"ID":"9e8f7bb4-6671-4ef8-b35a-45059af73b01","Type":"ContainerStarted","Data":"89cbb6be44ac6789a13ccd94ec2a5eb30f51a2000020301a4257579f65175f25"}
Jan 23 14:37:05 crc kubenswrapper[4775]: I0123 14:37:05.324223 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podStartSLOduration=7.324198795 podStartE2EDuration="7.324198795s" podCreationTimestamp="2026-01-23 14:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:37:00.249576435 +0000 UTC m=+1967.244405185" watchObservedRunningTime="2026-01-23 14:37:05.324198795 +0000 UTC m=+1972.319027565"
Jan 23 14:37:09 crc kubenswrapper[4775]: I0123 14:37:09.360548 4775 generic.go:334] "Generic (PLEG): container finished" podID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerID="89cbb6be44ac6789a13ccd94ec2a5eb30f51a2000020301a4257579f65175f25" exitCode=2
Jan 23 14:37:09 crc kubenswrapper[4775]: I0123 14:37:09.360635 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" event={"ID":"9e8f7bb4-6671-4ef8-b35a-45059af73b01","Type":"ContainerDied","Data":"89cbb6be44ac6789a13ccd94ec2a5eb30f51a2000020301a4257579f65175f25"}
Jan 23 14:37:09 crc kubenswrapper[4775]: I0123 14:37:09.362860 4775 scope.go:117] "RemoveContainer" containerID="b161842b5aa3dedc238e1aa217c4cd2d9623581d6c65e953c9e5fd5b44556ad4"
Jan 23 14:37:09 crc kubenswrapper[4775]: I0123 14:37:09.363684 4775 scope.go:117] "RemoveContainer" containerID="89cbb6be44ac6789a13ccd94ec2a5eb30f51a2000020301a4257579f65175f25"
Jan 23 14:37:09 crc kubenswrapper[4775]: E0123 14:37:09.364359 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 10s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01"
Jan 23 14:37:21 crc kubenswrapper[4775]: I0123 14:37:21.714775 4775 scope.go:117] "RemoveContainer" containerID="89cbb6be44ac6789a13ccd94ec2a5eb30f51a2000020301a4257579f65175f25"
Jan 23 14:37:22 crc kubenswrapper[4775]: I0123 14:37:22.511730 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" event={"ID":"9e8f7bb4-6671-4ef8-b35a-45059af73b01","Type":"ContainerStarted","Data":"c2686065ea0fd21e09216b2752bdc5ea00d6bff72a52304fb3c1e24866cf35b9"}
Jan 23 14:37:23 crc kubenswrapper[4775]: I0123 14:37:23.219182 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:37:23 crc kubenswrapper[4775]: I0123 14:37:23.219281 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:37:26 crc kubenswrapper[4775]: I0123 14:37:26.560654 4775 generic.go:334] "Generic (PLEG): container finished" podID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerID="c2686065ea0fd21e09216b2752bdc5ea00d6bff72a52304fb3c1e24866cf35b9" exitCode=2
Jan 23 14:37:26 crc kubenswrapper[4775]: I0123 14:37:26.560771 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" event={"ID":"9e8f7bb4-6671-4ef8-b35a-45059af73b01","Type":"ContainerDied","Data":"c2686065ea0fd21e09216b2752bdc5ea00d6bff72a52304fb3c1e24866cf35b9"}
Jan 23 14:37:26 crc kubenswrapper[4775]: I0123 14:37:26.561268 4775 scope.go:117] "RemoveContainer" containerID="89cbb6be44ac6789a13ccd94ec2a5eb30f51a2000020301a4257579f65175f25"
Jan 23 14:37:26 crc kubenswrapper[4775]: I0123 14:37:26.562455 4775 scope.go:117] "RemoveContainer" containerID="c2686065ea0fd21e09216b2752bdc5ea00d6bff72a52304fb3c1e24866cf35b9"
Jan 23 14:37:26 crc kubenswrapper[4775]: E0123 14:37:26.563190 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01"
Jan 23 14:37:37 crc kubenswrapper[4775]: I0123 14:37:37.713749 4775 scope.go:117] "RemoveContainer" containerID="c2686065ea0fd21e09216b2752bdc5ea00d6bff72a52304fb3c1e24866cf35b9"
Jan 23 14:37:37 crc kubenswrapper[4775]: E0123 14:37:37.714646 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01"
Jan 23 14:37:47 crc kubenswrapper[4775]: I0123 14:37:47.533219 4775 scope.go:117] "RemoveContainer" containerID="827309d081a52f2f4fbdc446573f9dbf6756c3faef728c7a3ede91f774184851"
Jan 23 14:37:47 crc kubenswrapper[4775]: I0123 14:37:47.592611 4775 scope.go:117] "RemoveContainer" containerID="4f1cabf38bb4ec4b946564e2b7accc422c82ed3dca66b33da4fca4b19d4c5643"
Jan 23 14:37:47 crc kubenswrapper[4775]: I0123 14:37:47.640648 4775 scope.go:117] "RemoveContainer" containerID="552a75aff373d33848d323f4e1a099464b0ab75b386e7916291405fa3aa8b333"
Jan 23 14:37:47 crc kubenswrapper[4775]: I0123 14:37:47.674569 4775 scope.go:117] "RemoveContainer" containerID="ecad2940c2ff1569920921fdd03a6c333edaa15c5f0818afcf6db854f924e5ab"
Jan 23 14:37:47 crc kubenswrapper[4775]: I0123 14:37:47.718511 4775 scope.go:117] "RemoveContainer" containerID="46c83cc2befa55d2730e0306d1a537315368a038fa5d8e25f6f9a9178ae4909d"
Jan 23 14:37:47 crc kubenswrapper[4775]: I0123 14:37:47.761878 4775 scope.go:117] "RemoveContainer" containerID="799ce1823863a3c15c53a4d22727a916392492bc10d370e2462dbc8b6ea31ac8"
Jan 23 14:37:47 crc kubenswrapper[4775]: I0123 14:37:47.799577 4775 scope.go:117] "RemoveContainer" containerID="36da3a3e665fb3823516d8d90857086698e0e37c43b293f38337204d81ca04a2"
Jan 23 14:37:47 crc kubenswrapper[4775]: I0123 14:37:47.835683 4775 scope.go:117] "RemoveContainer" containerID="f3d6d9e6a7043cb32f7f7ac11281394b9efc64f38742f080cf771797930a3cc3"
Jan 23 14:37:47 crc kubenswrapper[4775]: I0123 14:37:47.889324 4775 scope.go:117] "RemoveContainer" containerID="33a99232a0ae7d230c0ca5e3a7fcc4bde1520167a1ceba4a466d07976af3e8d1"
Jan 23 14:37:47 crc kubenswrapper[4775]: I0123 14:37:47.909087 4775 scope.go:117] "RemoveContainer" containerID="2079dfd1f90a546b48b0adf5addfe5584632a67d75d8c2a2dfabd83d3cfc9c6f"
Jan 23 14:37:47 crc kubenswrapper[4775]: I0123 14:37:47.969601 4775 scope.go:117] "RemoveContainer" containerID="6d2aa10a47d2fcb45e935313a220958ccb5ce5c86f680afa48a823e4a53178f0"
Jan 23 14:37:47 crc kubenswrapper[4775]: I0123 14:37:47.997574 4775 scope.go:117] "RemoveContainer" containerID="7683bb31e0e3c33c12802ae8ef8cb905ee4053a0b8cff940fda829caf0802a6a"
Jan 23 14:37:49 crc kubenswrapper[4775]: I0123 14:37:49.714259 4775 scope.go:117] "RemoveContainer" containerID="c2686065ea0fd21e09216b2752bdc5ea00d6bff72a52304fb3c1e24866cf35b9"
Jan 23 14:37:50 crc kubenswrapper[4775]: I0123 14:37:50.871724 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" event={"ID":"9e8f7bb4-6671-4ef8-b35a-45059af73b01","Type":"ContainerStarted","Data":"c96962219cc02cf6545d8daad2e49166d04ca29a855c1f10fa34771111704ad2"}
Jan 23 14:37:53 crc kubenswrapper[4775]: I0123 14:37:53.219015 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:37:53 crc kubenswrapper[4775]: I0123 14:37:53.219089 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:37:53 crc kubenswrapper[4775]: I0123 14:37:53.219144 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg"
Jan 23 14:37:53 crc kubenswrapper[4775]: I0123 14:37:53.220037 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d3d96378db42c2ddc5100447e504efd5667272c1b57105f220bac9f07cfe29ce"} pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 14:37:53 crc kubenswrapper[4775]: I0123 14:37:53.220109 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" containerID="cri-o://d3d96378db42c2ddc5100447e504efd5667272c1b57105f220bac9f07cfe29ce" gracePeriod=600
Jan 23 14:37:53 crc kubenswrapper[4775]: I0123 14:37:53.916681 4775 generic.go:334] "Generic (PLEG): container finished" podID="4fea0767-0566-4214-855d-ed0373946271" containerID="d3d96378db42c2ddc5100447e504efd5667272c1b57105f220bac9f07cfe29ce" exitCode=0
Jan 23 14:37:53 crc kubenswrapper[4775]: I0123 14:37:53.917288 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerDied","Data":"d3d96378db42c2ddc5100447e504efd5667272c1b57105f220bac9f07cfe29ce"}
Jan 23 14:37:53 crc kubenswrapper[4775]: I0123 14:37:53.917319 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerStarted","Data":"607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d"}
Jan 23 14:37:53 crc kubenswrapper[4775]: I0123 14:37:53.917338 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342"
Jan 23 14:37:54 crc kubenswrapper[4775]: I0123 14:37:54.935632 4775 generic.go:334] "Generic (PLEG): container finished" podID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerID="c96962219cc02cf6545d8daad2e49166d04ca29a855c1f10fa34771111704ad2" exitCode=2
Jan 23 14:37:54 crc kubenswrapper[4775]: I0123 14:37:54.935719 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" event={"ID":"9e8f7bb4-6671-4ef8-b35a-45059af73b01","Type":"ContainerDied","Data":"c96962219cc02cf6545d8daad2e49166d04ca29a855c1f10fa34771111704ad2"}
Jan 23 14:37:54 crc kubenswrapper[4775]: I0123 14:37:54.936193 4775 scope.go:117] "RemoveContainer" containerID="c2686065ea0fd21e09216b2752bdc5ea00d6bff72a52304fb3c1e24866cf35b9"
Jan 23 14:37:54 crc kubenswrapper[4775]: I0123 14:37:54.936983 4775 scope.go:117] "RemoveContainer" containerID="c96962219cc02cf6545d8daad2e49166d04ca29a855c1f10fa34771111704ad2"
Jan 23 14:37:54 crc kubenswrapper[4775]: E0123 14:37:54.937338 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01"
Jan 23 14:38:07 crc kubenswrapper[4775]: I0123 14:38:07.715723 4775 scope.go:117] "RemoveContainer" containerID="c96962219cc02cf6545d8daad2e49166d04ca29a855c1f10fa34771111704ad2"
Jan 23 14:38:07 crc kubenswrapper[4775]: E0123 14:38:07.719357 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01"
Jan 23 14:38:20 crc kubenswrapper[4775]: I0123 14:38:20.714503 4775 scope.go:117] "RemoveContainer" containerID="c96962219cc02cf6545d8daad2e49166d04ca29a855c1f10fa34771111704ad2"
Jan 23 14:38:20 crc kubenswrapper[4775]: E0123 14:38:20.715485 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01"
Jan 23 14:38:35 crc kubenswrapper[4775]: I0123 14:38:35.716082 4775 scope.go:117] "RemoveContainer" containerID="c96962219cc02cf6545d8daad2e49166d04ca29a855c1f10fa34771111704ad2"
Jan 23 14:38:36 crc kubenswrapper[4775]: I0123 14:38:36.365938 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" event={"ID":"9e8f7bb4-6671-4ef8-b35a-45059af73b01","Type":"ContainerStarted","Data":"459bbdd79b9ef93b768ffe9e959701153c794e89353802a93dd1cf650e3593cd"}
Jan 23 14:38:40 crc kubenswrapper[4775]: I0123 14:38:40.417729 4775 generic.go:334] "Generic (PLEG): container finished" podID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerID="459bbdd79b9ef93b768ffe9e959701153c794e89353802a93dd1cf650e3593cd" exitCode=2
Jan 23 14:38:40 crc kubenswrapper[4775]: I0123 14:38:40.417873 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" event={"ID":"9e8f7bb4-6671-4ef8-b35a-45059af73b01","Type":"ContainerDied","Data":"459bbdd79b9ef93b768ffe9e959701153c794e89353802a93dd1cf650e3593cd"}
Jan 23 14:38:40 crc kubenswrapper[4775]: I0123 14:38:40.418206 4775 scope.go:117] "RemoveContainer" containerID="c96962219cc02cf6545d8daad2e49166d04ca29a855c1f10fa34771111704ad2"
Jan 23 14:38:40 crc kubenswrapper[4775]: I0123 14:38:40.419251 4775 scope.go:117] "RemoveContainer" containerID="459bbdd79b9ef93b768ffe9e959701153c794e89353802a93dd1cf650e3593cd"
Jan 23 14:38:40 crc kubenswrapper[4775]: E0123 14:38:40.419646 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01"
Jan 23 14:38:48 crc kubenswrapper[4775]: I0123 14:38:48.282242 4775 scope.go:117] "RemoveContainer" containerID="af2e3d2fa526f083ebc61856e091755e854affc68850f0ccf9dc55db4575410a"
Jan 23 14:38:51 crc kubenswrapper[4775]: I0123 14:38:51.099308 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2jvq2"]
Jan 23 14:38:51 crc kubenswrapper[4775]: I0123 14:38:51.101590 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2jvq2"
Jan 23 14:38:51 crc kubenswrapper[4775]: I0123 14:38:51.124827 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2jvq2"]
Jan 23 14:38:51 crc kubenswrapper[4775]: I0123 14:38:51.176698 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxqvc\" (UniqueName: \"kubernetes.io/projected/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-kube-api-access-kxqvc\") pod \"redhat-operators-2jvq2\" (UID: \"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd\") " pod="openshift-marketplace/redhat-operators-2jvq2"
Jan 23 14:38:51 crc kubenswrapper[4775]: I0123 14:38:51.176866 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-catalog-content\") pod \"redhat-operators-2jvq2\" (UID: \"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd\") " pod="openshift-marketplace/redhat-operators-2jvq2"
Jan 23 14:38:51 crc kubenswrapper[4775]: I0123 14:38:51.176909 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-utilities\") pod \"redhat-operators-2jvq2\" (UID: \"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd\") " pod="openshift-marketplace/redhat-operators-2jvq2"
Jan 23 14:38:51 crc kubenswrapper[4775]: I0123 14:38:51.278864 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-catalog-content\") pod \"redhat-operators-2jvq2\" (UID: \"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd\") " pod="openshift-marketplace/redhat-operators-2jvq2"
Jan 23 14:38:51 crc kubenswrapper[4775]: I0123 14:38:51.278956 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-utilities\") pod \"redhat-operators-2jvq2\" (UID: \"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd\") " pod="openshift-marketplace/redhat-operators-2jvq2"
Jan 23 14:38:51 crc kubenswrapper[4775]: I0123 14:38:51.279041 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxqvc\" (UniqueName: \"kubernetes.io/projected/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-kube-api-access-kxqvc\") pod \"redhat-operators-2jvq2\" (UID: \"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd\") " pod="openshift-marketplace/redhat-operators-2jvq2"
Jan 23 14:38:51 crc kubenswrapper[4775]: I0123 14:38:51.279472 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-catalog-content\") pod \"redhat-operators-2jvq2\" (UID: \"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd\") " pod="openshift-marketplace/redhat-operators-2jvq2"
Jan 23 14:38:51 crc kubenswrapper[4775]: I0123 14:38:51.279523 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-utilities\") pod \"redhat-operators-2jvq2\" (UID: \"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd\") " pod="openshift-marketplace/redhat-operators-2jvq2"
Jan 23 14:38:51 crc kubenswrapper[4775]: I0123 14:38:51.307111 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxqvc\" (UniqueName: \"kubernetes.io/projected/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-kube-api-access-kxqvc\") pod \"redhat-operators-2jvq2\" (UID: \"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd\") " pod="openshift-marketplace/redhat-operators-2jvq2"
Jan 23 14:38:51 crc kubenswrapper[4775]: I0123 14:38:51.447484 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2jvq2"
Jan 23 14:38:51 crc kubenswrapper[4775]: I0123 14:38:51.765631 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2jvq2"]
Jan 23 14:38:51 crc kubenswrapper[4775]: W0123 14:38:51.769813 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e9a2482_2cdd_40c0_b4f3_3caeadef05dd.slice/crio-59d0b53a2d770041cd407cda07f9c2f93ff02324e246981fddc5e54130ac08a8 WatchSource:0}: Error finding container 59d0b53a2d770041cd407cda07f9c2f93ff02324e246981fddc5e54130ac08a8: Status 404 returned error can't find the container with id 59d0b53a2d770041cd407cda07f9c2f93ff02324e246981fddc5e54130ac08a8
Jan 23 14:38:52 crc kubenswrapper[4775]: I0123 14:38:52.527305 4775 generic.go:334] "Generic (PLEG): container finished" podID="4e9a2482-2cdd-40c0-b4f3-3caeadef05dd" containerID="15e0409151174fe824f88caad2aac9c730f6a5783cbbb8f5485e33d5ad371539" exitCode=0
Jan 23 14:38:52 crc kubenswrapper[4775]: I0123 14:38:52.527389 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jvq2" event={"ID":"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd","Type":"ContainerDied","Data":"15e0409151174fe824f88caad2aac9c730f6a5783cbbb8f5485e33d5ad371539"}
Jan 23 14:38:52 crc kubenswrapper[4775]: I0123 14:38:52.527673 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jvq2" event={"ID":"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd","Type":"ContainerStarted","Data":"59d0b53a2d770041cd407cda07f9c2f93ff02324e246981fddc5e54130ac08a8"}
Jan 23 14:38:52 crc kubenswrapper[4775]: I0123 14:38:52.529300 4775 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 14:38:53 crc kubenswrapper[4775]: I0123 14:38:53.543018 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jvq2" event={"ID":"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd","Type":"ContainerStarted","Data":"80a7e1b79e10ade5ca3411786ad426abb5b9818c2d785fcefea166ea61d61aad"}
Jan 23 14:38:53 crc kubenswrapper[4775]: I0123 14:38:53.724239 4775 scope.go:117] "RemoveContainer" containerID="459bbdd79b9ef93b768ffe9e959701153c794e89353802a93dd1cf650e3593cd"
Jan 23 14:38:53 crc kubenswrapper[4775]: E0123 14:38:53.724454 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01"
Jan 23 14:38:54 crc kubenswrapper[4775]: I0123 14:38:54.560255 4775 generic.go:334] "Generic (PLEG): container finished" podID="4e9a2482-2cdd-40c0-b4f3-3caeadef05dd" containerID="80a7e1b79e10ade5ca3411786ad426abb5b9818c2d785fcefea166ea61d61aad" exitCode=0
Jan 23 14:38:54 crc kubenswrapper[4775]: I0123 14:38:54.560501 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jvq2" event={"ID":"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd","Type":"ContainerDied","Data":"80a7e1b79e10ade5ca3411786ad426abb5b9818c2d785fcefea166ea61d61aad"}
Jan 23 14:38:55 crc kubenswrapper[4775]: I0123 14:38:55.573289 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jvq2" event={"ID":"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd","Type":"ContainerStarted","Data":"055c8aa767c79af1212fd0914dc51175e88ab60bb51e3345738549f51d951f2b"}
Jan 23 14:38:55 crc kubenswrapper[4775]: I0123 14:38:55.595339 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2jvq2" podStartSLOduration=2.141401147 podStartE2EDuration="4.595305123s" podCreationTimestamp="2026-01-23 14:38:51 +0000 UTC" firstStartedPulling="2026-01-23 14:38:52.529106514 +0000 UTC m=+2079.523935254" lastFinishedPulling="2026-01-23 14:38:54.98301045 +0000 UTC m=+2081.977839230" observedRunningTime="2026-01-23 14:38:55.591217973 +0000 UTC m=+2082.586046723" watchObservedRunningTime="2026-01-23 14:38:55.595305123 +0000 UTC m=+2082.590133903"
Jan 23 14:39:01 crc kubenswrapper[4775]: I0123 14:39:01.448413 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2jvq2"
Jan 23 14:39:01 crc kubenswrapper[4775]: I0123 14:39:01.448977 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2jvq2"
Jan 23 14:39:02 crc kubenswrapper[4775]: I0123 14:39:02.495333 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2jvq2" podUID="4e9a2482-2cdd-40c0-b4f3-3caeadef05dd" containerName="registry-server" probeResult="failure" output=<
Jan 23 14:39:02 crc kubenswrapper[4775]: timeout: failed to connect service ":50051" within 1s
Jan 23 14:39:02 crc kubenswrapper[4775]: >
Jan 23 14:39:04 crc kubenswrapper[4775]: I0123 14:39:04.714464 4775 scope.go:117] "RemoveContainer" containerID="459bbdd79b9ef93b768ffe9e959701153c794e89353802a93dd1cf650e3593cd"
Jan 23 14:39:04 crc kubenswrapper[4775]: E0123 14:39:04.714963 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01"
Jan 23 14:39:11 crc kubenswrapper[4775]: I0123 14:39:11.517214 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2jvq2"
Jan 23 14:39:11 crc kubenswrapper[4775]: I0123 14:39:11.577591 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2jvq2"
Jan 23 14:39:11 crc kubenswrapper[4775]: I0123 14:39:11.784485 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2jvq2"]
Jan 23 14:39:12 crc kubenswrapper[4775]: I0123 14:39:12.776462 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2jvq2" podUID="4e9a2482-2cdd-40c0-b4f3-3caeadef05dd" containerName="registry-server" containerID="cri-o://055c8aa767c79af1212fd0914dc51175e88ab60bb51e3345738549f51d951f2b" gracePeriod=2
Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.257797 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2jvq2"
Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.303547 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-utilities\") pod \"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd\" (UID: \"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd\") "
Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.303953 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxqvc\" (UniqueName: \"kubernetes.io/projected/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-kube-api-access-kxqvc\") pod \"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd\" (UID: \"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd\") "
Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.304020 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-catalog-content\") pod \"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd\" (UID: \"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd\") "
Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.325970 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-utilities" (OuterVolumeSpecName: "utilities") pod "4e9a2482-2cdd-40c0-b4f3-3caeadef05dd" (UID: "4e9a2482-2cdd-40c0-b4f3-3caeadef05dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.336108 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-kube-api-access-kxqvc" (OuterVolumeSpecName: "kube-api-access-kxqvc") pod "4e9a2482-2cdd-40c0-b4f3-3caeadef05dd" (UID: "4e9a2482-2cdd-40c0-b4f3-3caeadef05dd"). InnerVolumeSpecName "kube-api-access-kxqvc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.405822 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxqvc\" (UniqueName: \"kubernetes.io/projected/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-kube-api-access-kxqvc\") on node \"crc\" DevicePath \"\""
Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.405870 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.460668 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e9a2482-2cdd-40c0-b4f3-3caeadef05dd" (UID: "4e9a2482-2cdd-40c0-b4f3-3caeadef05dd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.507623 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.789097 4775 generic.go:334] "Generic (PLEG): container finished" podID="4e9a2482-2cdd-40c0-b4f3-3caeadef05dd" containerID="055c8aa767c79af1212fd0914dc51175e88ab60bb51e3345738549f51d951f2b" exitCode=0
Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.789173 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2jvq2"
Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.789237 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jvq2" event={"ID":"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd","Type":"ContainerDied","Data":"055c8aa767c79af1212fd0914dc51175e88ab60bb51e3345738549f51d951f2b"}
Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.790346 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jvq2" event={"ID":"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd","Type":"ContainerDied","Data":"59d0b53a2d770041cd407cda07f9c2f93ff02324e246981fddc5e54130ac08a8"}
Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.790372 4775 scope.go:117] "RemoveContainer" containerID="055c8aa767c79af1212fd0914dc51175e88ab60bb51e3345738549f51d951f2b"
Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.821110 4775 scope.go:117] "RemoveContainer" containerID="80a7e1b79e10ade5ca3411786ad426abb5b9818c2d785fcefea166ea61d61aad"
Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.822219 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2jvq2"]
Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.829436 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2jvq2"]
Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.843737 4775 scope.go:117] "RemoveContainer" containerID="15e0409151174fe824f88caad2aac9c730f6a5783cbbb8f5485e33d5ad371539"
Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.883962 4775 scope.go:117] "RemoveContainer" containerID="055c8aa767c79af1212fd0914dc51175e88ab60bb51e3345738549f51d951f2b"
Jan 23 14:39:13 crc kubenswrapper[4775]: E0123 14:39:13.884314 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"055c8aa767c79af1212fd0914dc51175e88ab60bb51e3345738549f51d951f2b\": container with ID starting with 055c8aa767c79af1212fd0914dc51175e88ab60bb51e3345738549f51d951f2b not found: ID does not exist" containerID="055c8aa767c79af1212fd0914dc51175e88ab60bb51e3345738549f51d951f2b"
Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.884354 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"055c8aa767c79af1212fd0914dc51175e88ab60bb51e3345738549f51d951f2b"} err="failed to get container status \"055c8aa767c79af1212fd0914dc51175e88ab60bb51e3345738549f51d951f2b\": rpc error: code = NotFound desc = could not find container \"055c8aa767c79af1212fd0914dc51175e88ab60bb51e3345738549f51d951f2b\": container with ID starting with 055c8aa767c79af1212fd0914dc51175e88ab60bb51e3345738549f51d951f2b not found: ID does not exist"
Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.884378 4775 scope.go:117] "RemoveContainer" containerID="80a7e1b79e10ade5ca3411786ad426abb5b9818c2d785fcefea166ea61d61aad"
Jan 23 14:39:13 crc kubenswrapper[4775]: E0123 14:39:13.884626 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80a7e1b79e10ade5ca3411786ad426abb5b9818c2d785fcefea166ea61d61aad\": container with ID starting with 80a7e1b79e10ade5ca3411786ad426abb5b9818c2d785fcefea166ea61d61aad not found: ID does not exist" containerID="80a7e1b79e10ade5ca3411786ad426abb5b9818c2d785fcefea166ea61d61aad"
Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.884654 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80a7e1b79e10ade5ca3411786ad426abb5b9818c2d785fcefea166ea61d61aad"} err="failed to get container status \"80a7e1b79e10ade5ca3411786ad426abb5b9818c2d785fcefea166ea61d61aad\": rpc error: code = NotFound desc = could not find container \"80a7e1b79e10ade5ca3411786ad426abb5b9818c2d785fcefea166ea61d61aad\": container with ID starting with 80a7e1b79e10ade5ca3411786ad426abb5b9818c2d785fcefea166ea61d61aad not found: ID does not exist"
Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.884676 4775 scope.go:117] "RemoveContainer" containerID="15e0409151174fe824f88caad2aac9c730f6a5783cbbb8f5485e33d5ad371539"
Jan 23 14:39:13 crc kubenswrapper[4775]: E0123 14:39:13.884956 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15e0409151174fe824f88caad2aac9c730f6a5783cbbb8f5485e33d5ad371539\": container with ID starting with 15e0409151174fe824f88caad2aac9c730f6a5783cbbb8f5485e33d5ad371539 not found: ID does not exist" containerID="15e0409151174fe824f88caad2aac9c730f6a5783cbbb8f5485e33d5ad371539"
Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.884979 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15e0409151174fe824f88caad2aac9c730f6a5783cbbb8f5485e33d5ad371539"} err="failed to get container status \"15e0409151174fe824f88caad2aac9c730f6a5783cbbb8f5485e33d5ad371539\": rpc error: code = NotFound desc = could not find container \"15e0409151174fe824f88caad2aac9c730f6a5783cbbb8f5485e33d5ad371539\": container with ID starting with 15e0409151174fe824f88caad2aac9c730f6a5783cbbb8f5485e33d5ad371539 not found: ID does not exist"
Jan 23 14:39:15 crc kubenswrapper[4775]: I0123 14:39:15.730147 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e9a2482-2cdd-40c0-b4f3-3caeadef05dd" path="/var/lib/kubelet/pods/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd/volumes"
Jan 23 14:39:17 crc kubenswrapper[4775]: I0123 14:39:17.715295 4775 scope.go:117] "RemoveContainer" containerID="459bbdd79b9ef93b768ffe9e959701153c794e89353802a93dd1cf650e3593cd"
Jan 23 14:39:17 crc kubenswrapper[4775]: E0123 14:39:17.716312 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01"
Jan 23 14:39:21 crc kubenswrapper[4775]: E0123 14:39:21.076549 4775 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e9a2482_2cdd_40c0_b4f3_3caeadef05dd.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e9a2482_2cdd_40c0_b4f3_3caeadef05dd.slice/crio-59d0b53a2d770041cd407cda07f9c2f93ff02324e246981fddc5e54130ac08a8\": RecentStats: unable to find data in memory cache]"
Jan 23 14:39:30 crc kubenswrapper[4775]: I0123 14:39:30.714291 4775 scope.go:117] "RemoveContainer" containerID="459bbdd79b9ef93b768ffe9e959701153c794e89353802a93dd1cf650e3593cd"
Jan 23 14:39:30 crc kubenswrapper[4775]: E0123 14:39:30.715374 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01"
Jan 23 14:39:31 crc kubenswrapper[4775]: E0123 14:39:31.294204 4775 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e9a2482_2cdd_40c0_b4f3_3caeadef05dd.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e9a2482_2cdd_40c0_b4f3_3caeadef05dd.slice/crio-59d0b53a2d770041cd407cda07f9c2f93ff02324e246981fddc5e54130ac08a8\": RecentStats: unable to find data in memory cache]"
Jan 23 14:39:41 crc kubenswrapper[4775]: E0123 14:39:41.493218 4775 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e9a2482_2cdd_40c0_b4f3_3caeadef05dd.slice/crio-59d0b53a2d770041cd407cda07f9c2f93ff02324e246981fddc5e54130ac08a8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e9a2482_2cdd_40c0_b4f3_3caeadef05dd.slice\": RecentStats: unable to find data in memory cache]"
Jan 23 14:39:42 crc kubenswrapper[4775]: I0123 14:39:42.713979 4775 scope.go:117] "RemoveContainer" containerID="459bbdd79b9ef93b768ffe9e959701153c794e89353802a93dd1cf650e3593cd"
Jan 23 14:39:42 crc kubenswrapper[4775]: E0123 14:39:42.714655 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01"
Jan 23 14:39:48 crc kubenswrapper[4775]: I0123 14:39:48.408847 4775 scope.go:117] "RemoveContainer" containerID="5660aa2517d0892f37febd6e7336a548ede2e720ab7264d812ad264a50eb46b2"
Jan 23 14:39:48 crc kubenswrapper[4775]: I0123 14:39:48.446185 4775 scope.go:117] "RemoveContainer" containerID="8739f351b2bc9ad8d8fe3ea2133ea2116442a4d5b5cf5ef247dd695ec789dddf"
Jan 23 14:39:48 crc kubenswrapper[4775]: I0123 14:39:48.498742 4775 scope.go:117] "RemoveContainer" containerID="61ab9533e70d4b69baa5f710542bcb0de5d0a3981f871d6eb9f7dfa31ff05f49"
Jan 23 14:39:48 crc kubenswrapper[4775]: I0123 14:39:48.532377 4775 scope.go:117] "RemoveContainer" containerID="f1433b1b1039e1ad5b79126e2b4c0ca66e85ee090af1bd408ecba19e2c872f9a"
Jan 23 14:39:48 crc kubenswrapper[4775]: I0123 14:39:48.569698 4775 scope.go:117] "RemoveContainer" containerID="edf9ee8a876623f0b7161ac8eb02db7ebf284b2ff4311bc67eb9dd19aea83eba"
Jan 23 14:39:48 crc kubenswrapper[4775]: I0123 14:39:48.611673 4775 scope.go:117] "RemoveContainer" containerID="7a9edcf7a6eef68f25783c87ff91eb1a9a70ab35e82018e110b39960153337f3"
Jan 23 14:39:48 crc kubenswrapper[4775]: I0123 14:39:48.645123 4775 scope.go:117] "RemoveContainer" containerID="d5a625216c448145f1513473de681abbe074c66d1f215fbd1239d870733f21c4"
Jan 23 14:39:48 crc kubenswrapper[4775]: I0123 14:39:48.671412 4775 scope.go:117] "RemoveContainer" containerID="f00011167bc09af603822453b51182838d413ff1ad414892e875b504e0751ab6"
Jan 23 14:39:48 crc kubenswrapper[4775]: I0123 14:39:48.692580 4775 scope.go:117] "RemoveContainer" containerID="de44f8ed18b4260ec3e0e35481cd929500e4cac5322c792037bcf7ae3fda7a94"
Jan 23 14:39:51 crc kubenswrapper[4775]: E0123 14:39:51.760655 4775 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e9a2482_2cdd_40c0_b4f3_3caeadef05dd.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e9a2482_2cdd_40c0_b4f3_3caeadef05dd.slice/crio-59d0b53a2d770041cd407cda07f9c2f93ff02324e246981fddc5e54130ac08a8\": RecentStats: unable to find data in memory cache]"
Jan 23 14:39:53 crc kubenswrapper[4775]: I0123 14:39:53.218590 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:39:53 crc kubenswrapper[4775]: I0123 14:39:53.219022 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:39:53 crc kubenswrapper[4775]: I0123 14:39:53.722447 4775 scope.go:117] "RemoveContainer" containerID="459bbdd79b9ef93b768ffe9e959701153c794e89353802a93dd1cf650e3593cd"
Jan 23 14:39:53 crc kubenswrapper[4775]: E0123 14:39:53.722856 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01"
Jan 23 14:40:01 crc kubenswrapper[4775]: E0123 14:40:01.948436 4775 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e9a2482_2cdd_40c0_b4f3_3caeadef05dd.slice/crio-59d0b53a2d770041cd407cda07f9c2f93ff02324e246981fddc5e54130ac08a8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e9a2482_2cdd_40c0_b4f3_3caeadef05dd.slice\": RecentStats: unable to find data in memory cache]"
Jan 23 14:40:04 crc kubenswrapper[4775]: I0123 14:40:04.714480 4775 scope.go:117] "RemoveContainer" containerID="459bbdd79b9ef93b768ffe9e959701153c794e89353802a93dd1cf650e3593cd"
Jan 23 14:40:05 crc kubenswrapper[4775]: I0123 14:40:05.350871 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" event={"ID":"9e8f7bb4-6671-4ef8-b35a-45059af73b01","Type":"ContainerStarted","Data":"4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5"}
Jan 23 14:40:09 crc kubenswrapper[4775]: I0123 14:40:09.394830 4775 generic.go:334] "Generic (PLEG): container finished" podID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5" exitCode=2
Jan 23 14:40:09 crc kubenswrapper[4775]: I0123 14:40:09.394902 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" event={"ID":"9e8f7bb4-6671-4ef8-b35a-45059af73b01","Type":"ContainerDied","Data":"4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5"}
Jan 23 14:40:09 crc kubenswrapper[4775]: I0123 14:40:09.395245 4775 scope.go:117] "RemoveContainer" containerID="459bbdd79b9ef93b768ffe9e959701153c794e89353802a93dd1cf650e3593cd"
Jan 23 14:40:09 crc kubenswrapper[4775]: I0123 14:40:09.395650 4775 scope.go:117] "RemoveContainer" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5"
Jan 23 14:40:09 crc kubenswrapper[4775]: E0123 14:40:09.395891 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01"
Jan 23 14:40:12 crc kubenswrapper[4775]: E0123 14:40:12.224278 4775 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e9a2482_2cdd_40c0_b4f3_3caeadef05dd.slice/crio-59d0b53a2d770041cd407cda07f9c2f93ff02324e246981fddc5e54130ac08a8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e9a2482_2cdd_40c0_b4f3_3caeadef05dd.slice\": RecentStats: unable to find data in memory cache]"
Jan 23 14:40:23 crc kubenswrapper[4775]: I0123 14:40:23.219415 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:40:23 crc kubenswrapper[4775]: I0123 14:40:23.220247 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:40:23 crc kubenswrapper[4775]: I0123 14:40:23.725242 4775 scope.go:117] "RemoveContainer" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5"
Jan 23 14:40:23 crc kubenswrapper[4775]: E0123 14:40:23.725603 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed 
container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" Jan 23 14:40:34 crc kubenswrapper[4775]: I0123 14:40:34.714154 4775 scope.go:117] "RemoveContainer" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5" Jan 23 14:40:34 crc kubenswrapper[4775]: E0123 14:40:34.714923 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" Jan 23 14:40:45 crc kubenswrapper[4775]: I0123 14:40:45.714285 4775 scope.go:117] "RemoveContainer" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5" Jan 23 14:40:45 crc kubenswrapper[4775]: E0123 14:40:45.715002 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" Jan 23 14:40:48 crc kubenswrapper[4775]: I0123 14:40:48.903117 4775 scope.go:117] "RemoveContainer" containerID="ed23d1d8c2e578153c70d817dfeffe62e4af30e952a97680b7c773eb23fb2ca1" Jan 23 14:40:48 crc kubenswrapper[4775]: I0123 14:40:48.934512 4775 scope.go:117] "RemoveContainer" containerID="2a4347263630b9bca7d3c8fbb1ac8953b6f41d8acd21d8aebe8a8fad3474db05" Jan 23 14:40:48 crc kubenswrapper[4775]: I0123 14:40:48.970541 4775 scope.go:117] "RemoveContainer" containerID="8338a669e0d43937d5f843231e5fbbed5ec502884f9ba96c38e08d3114af925f" Jan 23 14:40:49 crc kubenswrapper[4775]: I0123 14:40:49.018016 4775 scope.go:117] "RemoveContainer" containerID="3951b61bf0f5fd68e8a231037d3c4c31e8105e9a338b029e1bef1e8babd9023f" Jan 23 14:40:49 crc kubenswrapper[4775]: I0123 14:40:49.068523 4775 scope.go:117] "RemoveContainer" containerID="6d9268bfe9748ec6624655bc60aabe83c7ae7e713292756baef52641a7e4c393" Jan 23 14:40:53 crc kubenswrapper[4775]: I0123 14:40:53.218972 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:40:53 crc kubenswrapper[4775]: I0123 14:40:53.219431 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:40:53 crc kubenswrapper[4775]: I0123 14:40:53.219494 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:40:53 crc kubenswrapper[4775]: I0123 14:40:53.220500 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d"} pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 14:40:53 crc kubenswrapper[4775]: I0123 14:40:53.220603 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" containerID="cri-o://607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" gracePeriod=600 Jan 23 14:40:53 crc kubenswrapper[4775]: E0123 14:40:53.275044 4775 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fea0767_0566_4214_855d_ed0373946271.slice/crio-conmon-607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d.scope\": RecentStats: unable to find data in memory cache]" Jan 23 14:40:53 crc kubenswrapper[4775]: E0123 14:40:53.343349 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:40:53 crc kubenswrapper[4775]: I0123 14:40:53.854951 4775 generic.go:334] "Generic (PLEG): container finished" podID="4fea0767-0566-4214-855d-ed0373946271" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" exitCode=0 Jan 23 14:40:53 crc kubenswrapper[4775]: I0123 14:40:53.855026 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerDied","Data":"607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d"} Jan 23 14:40:53 crc kubenswrapper[4775]: I0123 14:40:53.855117 4775 scope.go:117] "RemoveContainer" containerID="d3d96378db42c2ddc5100447e504efd5667272c1b57105f220bac9f07cfe29ce" Jan 23 14:40:53 crc kubenswrapper[4775]: I0123 14:40:53.855996 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:40:53 crc kubenswrapper[4775]: E0123 14:40:53.856409 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:40:57 crc kubenswrapper[4775]: I0123 14:40:57.713650 4775 scope.go:117] "RemoveContainer" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5" Jan 23 14:40:57 crc kubenswrapper[4775]: E0123 14:40:57.714410 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" 
pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" Jan 23 14:41:04 crc kubenswrapper[4775]: I0123 14:41:04.713966 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:41:04 crc kubenswrapper[4775]: E0123 14:41:04.715112 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:41:10 crc kubenswrapper[4775]: I0123 14:41:10.713627 4775 scope.go:117] "RemoveContainer" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5" Jan 23 14:41:10 crc kubenswrapper[4775]: E0123 14:41:10.714691 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" Jan 23 14:41:15 crc kubenswrapper[4775]: I0123 14:41:15.714574 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:41:15 crc kubenswrapper[4775]: E0123 14:41:15.715879 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:41:25 crc kubenswrapper[4775]: I0123 14:41:25.714636 4775 scope.go:117] "RemoveContainer" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5" Jan 23 14:41:25 crc kubenswrapper[4775]: E0123 14:41:25.716133 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" Jan 23 14:41:27 crc kubenswrapper[4775]: I0123 14:41:27.714496 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:41:27 crc kubenswrapper[4775]: E0123 14:41:27.715010 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:41:37 crc kubenswrapper[4775]: I0123 14:41:37.714723 4775 scope.go:117] "RemoveContainer" 
containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5" Jan 23 14:41:37 crc kubenswrapper[4775]: E0123 14:41:37.715682 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" Jan 23 14:41:40 crc kubenswrapper[4775]: I0123 14:41:40.715058 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:41:40 crc kubenswrapper[4775]: E0123 14:41:40.715773 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:41:49 crc kubenswrapper[4775]: I0123 14:41:49.219409 4775 scope.go:117] "RemoveContainer" containerID="dfda1a9e78a513115b2113a2fcaec48ff69d5be5bceff17b19195b09fc695118" Jan 23 14:41:49 crc kubenswrapper[4775]: I0123 14:41:49.260736 4775 scope.go:117] "RemoveContainer" containerID="c66c6806d40d02d59cb9c150734f4cbd3c4f3513f91224480738c9614deade7b" Jan 23 14:41:49 crc kubenswrapper[4775]: I0123 14:41:49.311958 4775 scope.go:117] "RemoveContainer" containerID="5adc38c96008a8a594360e5e6bb09c834348a926f5530d7c364ad7b4ca6f9d2b" Jan 23 14:41:50 crc kubenswrapper[4775]: I0123 14:41:50.714225 4775 scope.go:117] "RemoveContainer" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5" Jan 23 14:41:50 crc kubenswrapper[4775]: E0123 14:41:50.715035 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" Jan 23 14:41:53 crc kubenswrapper[4775]: I0123 14:41:53.723510 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:41:53 crc kubenswrapper[4775]: E0123 14:41:53.724303 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:42:04 crc kubenswrapper[4775]: I0123 14:42:04.714948 4775 scope.go:117] "RemoveContainer" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5" Jan 23 14:42:04 crc kubenswrapper[4775]: E0123 14:42:04.717712 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage 
pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" Jan 23 14:42:04 crc kubenswrapper[4775]: I0123 14:42:04.921989 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-7d978f-gdlmv_898c8554-82c6-4777-8869-15981e356a84/keystone-api/0.log" Jan 23 14:42:07 crc kubenswrapper[4775]: I0123 14:42:07.713919 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:42:07 crc kubenswrapper[4775]: E0123 14:42:07.715254 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:42:08 crc kubenswrapper[4775]: I0123 14:42:08.742444 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_memcached-0_2e1f7aa1-1780-4ccb-b1a5-66b9b279d555/memcached/0.log" Jan 23 14:42:09 crc kubenswrapper[4775]: I0123 14:42:09.308156 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-api-31e4-account-create-update-2rd2s_95df8848-8035-4302-9689-db060f7d4148/mariadb-account-create-update/0.log" Jan 23 14:42:09 crc kubenswrapper[4775]: I0123 14:42:09.852465 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-api-db-create-bfq79_98b564d3-5399-47b6-9397-4c3b006f9e13/mariadb-database-create/0.log" Jan 23 14:42:10 crc kubenswrapper[4775]: I0123 14:42:10.419153 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell0-db-create-d8kgs_603674a6-1055-4e27-b370-2b57865ebc55/mariadb-database-create/0.log" Jan 23 14:42:10 crc kubenswrapper[4775]: I0123 14:42:10.919493 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell0-f1e1-account-create-update-8ng7h_48eb2aff-1769-415f-b284-8d0cbf32a4e9/mariadb-account-create-update/0.log" Jan 23 14:42:11 crc kubenswrapper[4775]: I0123 14:42:11.409598 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell1-574a-account-create-update-mjhg8_15c2fb30-3be5-4e47-b2d3-8fbd54665494/mariadb-account-create-update/0.log" Jan 23 14:42:11 crc kubenswrapper[4775]: I0123 14:42:11.934411 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell1-db-create-82jzj_891c1a15-7b44-4c8f-be11-d06333a1d0d1/mariadb-database-create/0.log" Jan 23 14:42:12 crc kubenswrapper[4775]: I0123 14:42:12.561415 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-api-0_56066bf2-4408-46e5-8df0-6ce62447bf2a/nova-kuttl-api-log/0.log" Jan 23 14:42:13 crc kubenswrapper[4775]: I0123 14:42:13.070275 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-cell-mapping-qxjlc_a194a858-8c18-41e1-9a10-428397753ece/nova-manage/0.log" Jan 23 14:42:13 crc kubenswrapper[4775]: I0123 14:42:13.611496 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-conductor-0_fab3b1c6-093c-4891-957c-fad86eb8fd31/nova-kuttl-cell0-conductor-conductor/0.log" Jan 23 
14:42:14 crc kubenswrapper[4775]: I0123 14:42:14.077050 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-conductor-db-sync-2l6n8_12f70e17-ec31-43fc-ac56-d1742f962de5/nova-kuttl-cell0-conductor-db-sync/0.log" Jan 23 14:42:14 crc kubenswrapper[4775]: I0123 14:42:14.628492 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-cell-delete-w7tbz_9e8f7bb4-6671-4ef8-b35a-45059af73b01/nova-manage/5.log" Jan 23 14:42:15 crc kubenswrapper[4775]: I0123 14:42:15.173831 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-cell-mapping-4gfb8_3ef19dc5-1d78-479c-8220-340c46c44bdf/nova-manage/0.log" Jan 23 14:42:15 crc kubenswrapper[4775]: I0123 14:42:15.714543 4775 scope.go:117] "RemoveContainer" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5" Jan 23 14:42:15 crc kubenswrapper[4775]: E0123 14:42:15.714929 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" Jan 23 14:42:15 crc kubenswrapper[4775]: I0123 14:42:15.795390 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-conductor-0_1fd448a3-6897-490f-9c92-98590cee53ca/nova-kuttl-cell1-conductor-conductor/0.log" Jan 23 14:42:16 crc kubenswrapper[4775]: I0123 14:42:16.375332 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-conductor-db-sync-sjz5r_263d2fcc-c533-4291-8e78-d8e9a2ee2894/nova-kuttl-cell1-conductor-db-sync/0.log" Jan 23 14:42:16 crc kubenswrapper[4775]: I0123 14:42:16.910624 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-novncproxy-0_cb15b357-f464-4e43-a038-3b9e72455d49/nova-kuttl-cell1-novncproxy-novncproxy/0.log" Jan 23 14:42:17 crc kubenswrapper[4775]: I0123 14:42:17.527258 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-metadata-0_72d0a843-11de-43a6-9c92-6a65a6d406ec/nova-kuttl-metadata-log/0.log" Jan 23 14:42:18 crc kubenswrapper[4775]: I0123 14:42:18.137253 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-scheduler-0_bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0/nova-kuttl-scheduler-scheduler/0.log" Jan 23 14:42:18 crc kubenswrapper[4775]: I0123 14:42:18.682082 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_481cbe1b-2796-4ad2-a342-3661afa62383/galera/0.log" Jan 23 14:42:19 crc kubenswrapper[4775]: I0123 14:42:19.297382 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_372c512d-5894-49da-ae1e-cb3e54aadacc/galera/0.log" Jan 23 14:42:19 crc kubenswrapper[4775]: I0123 14:42:19.882174 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstackclient_76733f2d-491c-45dd-bcf5-1a4423019717/openstackclient/0.log" Jan 23 14:42:20 crc kubenswrapper[4775]: I0123 14:42:20.523307 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-7787b67bb8-psq7t_6b653824-2e32-431a-8b16-f8687610c0fe/placement-log/0.log" Jan 23 14:42:20 crc 
kubenswrapper[4775]: I0123 14:42:20.714272 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:42:20 crc kubenswrapper[4775]: E0123 14:42:20.714662 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:42:21 crc kubenswrapper[4775]: I0123 14:42:21.025010 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_401a94b6-0628-4cea-b62a-c3229a913d16/rabbitmq/0.log" Jan 23 14:42:21 crc kubenswrapper[4775]: I0123 14:42:21.575746 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_4b05c189-a694-4cbc-b679-a974e6bf99bc/rabbitmq/0.log" Jan 23 14:42:22 crc kubenswrapper[4775]: I0123 14:42:22.183941 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_70288c27-7f95-4843-a8fb-f2ac58ea8e1f/rabbitmq/0.log" Jan 23 14:42:28 crc kubenswrapper[4775]: I0123 14:42:28.713779 4775 scope.go:117] "RemoveContainer" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5" Jan 23 14:42:28 crc kubenswrapper[4775]: E0123 14:42:28.714970 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" Jan 23 14:42:35 crc kubenswrapper[4775]: I0123 14:42:35.714314 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:42:35 crc kubenswrapper[4775]: E0123 14:42:35.715385 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:42:42 crc kubenswrapper[4775]: I0123 14:42:42.715112 4775 scope.go:117] "RemoveContainer" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5" Jan 23 14:42:42 crc kubenswrapper[4775]: E0123 14:42:42.716079 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" Jan 23 14:42:48 crc kubenswrapper[4775]: I0123 14:42:48.714050 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:42:48 crc kubenswrapper[4775]: E0123 14:42:48.714792 4775 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:42:55 crc kubenswrapper[4775]: I0123 14:42:55.714504 4775 scope.go:117] "RemoveContainer" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5" Jan 23 14:42:56 crc kubenswrapper[4775]: I0123 14:42:56.164147 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" event={"ID":"9e8f7bb4-6671-4ef8-b35a-45059af73b01","Type":"ContainerStarted","Data":"bbfdd03b35aa0f43eb005676d0bb094e23186b219df60ec6cc05fba81339a83e"} Jan 23 14:42:57 crc kubenswrapper[4775]: I0123 14:42:57.212275 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz"] Jan 23 14:42:57 crc kubenswrapper[4775]: I0123 14:42:57.213230 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" containerID="cri-o://bbfdd03b35aa0f43eb005676d0bb094e23186b219df60ec6cc05fba81339a83e" gracePeriod=30 Jan 23 14:42:58 crc kubenswrapper[4775]: I0123 14:42:58.379436 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt_100f3a0b-4d11-495f-a6fe-57b196820ee3/extract/0.log" Jan 23 14:42:58 crc kubenswrapper[4775]: I0123 14:42:58.917197 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc_a7025f67-434a-4dba-9b3a-e3b809f5c614/extract/0.log" Jan 23 14:42:59 crc kubenswrapper[4775]: I0123 14:42:59.440024 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-pk9jd_56ee00d0-c0f0-442a-bf4a-7335b62c1c4e/manager/0.log" Jan 23 14:42:59 crc kubenswrapper[4775]: I0123 14:42:59.928527 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-dz7ft_9ce79c2a-2c52-48de-80a6-887d592578d3/manager/0.log" Jan 23 14:43:00 crc kubenswrapper[4775]: I0123 14:43:00.422128 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-ppxmc_352223d5-fa0a-43df-8bad-0eaa9b6b439d/manager/0.log" Jan 23 14:43:00 crc kubenswrapper[4775]: I0123 14:43:00.845166 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" Jan 23 14:43:00 crc kubenswrapper[4775]: I0123 14:43:00.904284 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-jq89z_64bae0eb-d703-4058-a545-b42d62045b90/manager/0.log" Jan 23 14:43:00 crc kubenswrapper[4775]: I0123 14:43:00.981553 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e8f7bb4-6671-4ef8-b35a-45059af73b01-scripts\") pod \"9e8f7bb4-6671-4ef8-b35a-45059af73b01\" (UID: \"9e8f7bb4-6671-4ef8-b35a-45059af73b01\") " Jan 23 14:43:00 crc kubenswrapper[4775]: I0123 14:43:00.981859 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e8f7bb4-6671-4ef8-b35a-45059af73b01-config-data\") pod \"9e8f7bb4-6671-4ef8-b35a-45059af73b01\" (UID: \"9e8f7bb4-6671-4ef8-b35a-45059af73b01\") " Jan 23 14:43:00 crc kubenswrapper[4775]: I0123 14:43:00.982795 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn495\" (UniqueName: \"kubernetes.io/projected/9e8f7bb4-6671-4ef8-b35a-45059af73b01-kube-api-access-rn495\") pod \"9e8f7bb4-6671-4ef8-b35a-45059af73b01\" (UID: \"9e8f7bb4-6671-4ef8-b35a-45059af73b01\") " Jan 23 14:43:00 crc kubenswrapper[4775]: I0123 14:43:00.988538 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e8f7bb4-6671-4ef8-b35a-45059af73b01-kube-api-access-rn495" (OuterVolumeSpecName: "kube-api-access-rn495") pod "9e8f7bb4-6671-4ef8-b35a-45059af73b01" (UID: "9e8f7bb4-6671-4ef8-b35a-45059af73b01"). InnerVolumeSpecName "kube-api-access-rn495". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:43:00 crc kubenswrapper[4775]: I0123 14:43:00.990024 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e8f7bb4-6671-4ef8-b35a-45059af73b01-scripts" (OuterVolumeSpecName: "scripts") pod "9e8f7bb4-6671-4ef8-b35a-45059af73b01" (UID: "9e8f7bb4-6671-4ef8-b35a-45059af73b01"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.012749 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e8f7bb4-6671-4ef8-b35a-45059af73b01-config-data" (OuterVolumeSpecName: "config-data") pod "9e8f7bb4-6671-4ef8-b35a-45059af73b01" (UID: "9e8f7bb4-6671-4ef8-b35a-45059af73b01"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.085349 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e8f7bb4-6671-4ef8-b35a-45059af73b01-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.085387 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e8f7bb4-6671-4ef8-b35a-45059af73b01-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.085409 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn495\" (UniqueName: \"kubernetes.io/projected/9e8f7bb4-6671-4ef8-b35a-45059af73b01-kube-api-access-rn495\") on node \"crc\" DevicePath \"\"" Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.221591 4775 generic.go:334] "Generic (PLEG): container finished" podID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerID="bbfdd03b35aa0f43eb005676d0bb094e23186b219df60ec6cc05fba81339a83e" exitCode=2 Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.221647 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" event={"ID":"9e8f7bb4-6671-4ef8-b35a-45059af73b01","Type":"ContainerDied","Data":"bbfdd03b35aa0f43eb005676d0bb094e23186b219df60ec6cc05fba81339a83e"} Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.221685 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" event={"ID":"9e8f7bb4-6671-4ef8-b35a-45059af73b01","Type":"ContainerDied","Data":"27562d541f20254a2f84db2c1a11a1410fb6f2f590a4c41036a86757dd88cf6b"} Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.221713 4775 scope.go:117] "RemoveContainer" containerID="bbfdd03b35aa0f43eb005676d0bb094e23186b219df60ec6cc05fba81339a83e" Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.221719 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.256296 4775 scope.go:117] "RemoveContainer" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5" Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.286837 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz"] Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.293701 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz"] Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.335581 4775 scope.go:117] "RemoveContainer" containerID="bbfdd03b35aa0f43eb005676d0bb094e23186b219df60ec6cc05fba81339a83e" Jan 23 14:43:01 crc kubenswrapper[4775]: E0123 14:43:01.337191 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbfdd03b35aa0f43eb005676d0bb094e23186b219df60ec6cc05fba81339a83e\": container with ID starting with bbfdd03b35aa0f43eb005676d0bb094e23186b219df60ec6cc05fba81339a83e not found: ID does not exist" containerID="bbfdd03b35aa0f43eb005676d0bb094e23186b219df60ec6cc05fba81339a83e" Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.337231 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbfdd03b35aa0f43eb005676d0bb094e23186b219df60ec6cc05fba81339a83e"} err="failed to get container status \"bbfdd03b35aa0f43eb005676d0bb094e23186b219df60ec6cc05fba81339a83e\": rpc error: code = NotFound desc = could not find container \"bbfdd03b35aa0f43eb005676d0bb094e23186b219df60ec6cc05fba81339a83e\": container with ID starting with bbfdd03b35aa0f43eb005676d0bb094e23186b219df60ec6cc05fba81339a83e not found: ID does not exist" Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.337254 4775 scope.go:117] "RemoveContainer" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5" Jan 23 14:43:01 crc kubenswrapper[4775]: E0123 14:43:01.339111 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5\": container with ID starting with 4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5 not found: ID does not exist" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5" Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.339139 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5"} err="failed to get container status \"4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5\": rpc error: code = NotFound desc = could not find container \"4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5\": container with ID starting with 4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5 not found: ID does not exist" Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.435244 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-xrmvt_841fb528-61a8-445e-a135-be26295bc975/manager/0.log" Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.729929 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" 
path="/var/lib/kubelet/pods/9e8f7bb4-6671-4ef8-b35a-45059af73b01/volumes" Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.950939 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-sg9x5_d9e69fcf-58c9-45fe-a291-4628c8219e10/manager/0.log" Jan 23 14:43:02 crc kubenswrapper[4775]: I0123 14:43:02.662620 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-58749ffdfb-mcrj4_5a65a9ef-28c7-46ae-826d-5546af1103a5/manager/0.log" Jan 23 14:43:02 crc kubenswrapper[4775]: I0123 14:43:02.714494 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:43:02 crc kubenswrapper[4775]: E0123 14:43:02.717774 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:43:03 crc kubenswrapper[4775]: I0123 14:43:03.172729 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-f7lm6_d98bebb2-a42a-45a6-b452-a82ce1f62896/manager/0.log" Jan 23 14:43:03 crc kubenswrapper[4775]: I0123 14:43:03.756113 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-bgbpj_0784c928-e0c5-4afb-99cb-4f1f96820a14/manager/0.log" Jan 23 14:43:04 crc kubenswrapper[4775]: I0123 14:43:04.216868 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-pfdc5_853c6152-25bf-4374-a941-f9cd4202c87f/manager/0.log" Jan 23 14:43:04 crc kubenswrapper[4775]: I0123 14:43:04.770079 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg_bb6ce8ae-8d3f-4988-9386-6a20487f8ae9/manager/0.log" Jan 23 14:43:05 crc kubenswrapper[4775]: I0123 14:43:05.252583 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-sxkzh_9710b785-e422-4aca-88e8-e88d26d4e724/manager/0.log" Jan 23 14:43:06 crc kubenswrapper[4775]: I0123 14:43:06.267336 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7c5fcc4cc6-wwr78_92377252-2e4d-48bb-95ea-724a4ff5c788/manager/0.log" Jan 23 14:43:06 crc kubenswrapper[4775]: I0123 14:43:06.745787 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-index-x4gqk_78f375c8-5d62-4cbb-b348-8205d476d603/registry-server/0.log" Jan 23 14:43:07 crc kubenswrapper[4775]: I0123 14:43:07.260092 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-vl7m5_a07598ff-60cc-482e-a551-af751575709c/manager/0.log" Jan 23 14:43:07 crc kubenswrapper[4775]: I0123 14:43:07.797715 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854zk48c_44a963d8-d403-42d5-acd2-a0379f07db51/manager/0.log" Jan 23 14:43:08 crc kubenswrapper[4775]: I0123 
14:43:08.714586 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-bb8f85db-bkqk9_313b5382-60cf-4627-8ba7-a091fc457989/manager/0.log" Jan 23 14:43:09 crc kubenswrapper[4775]: I0123 14:43:09.213393 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-5czdz_a0ddc210-ca29-42e4-a4c2-a07881434fed/registry-server/0.log" Jan 23 14:43:09 crc kubenswrapper[4775]: I0123 14:43:09.745784 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-xst4r_3d7c7bc6-5124-4cd4-a406-448ca94ba640/manager/0.log" Jan 23 14:43:10 crc kubenswrapper[4775]: I0123 14:43:10.283109 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-n4k5s_072b9a9d-8a08-454c-b1b6-628fcdcc91df/manager/0.log" Jan 23 14:43:10 crc kubenswrapper[4775]: I0123 14:43:10.817946 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-2lhsf_f9da51f1-a035-44b8-9391-0d6018a84c61/operator/0.log" Jan 23 14:43:11 crc kubenswrapper[4775]: I0123 14:43:11.321688 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-nqw74_ecef6080-ea2c-43f4-8ffa-da2ceb59369d/manager/0.log" Jan 23 14:43:11 crc kubenswrapper[4775]: I0123 14:43:11.841705 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-jrhlh_91da96b4-921a-4b88-9804-55745989e08b/manager/0.log" Jan 23 14:43:12 crc kubenswrapper[4775]: I0123 14:43:12.360248 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-xtmz8_9f9597bf-12a1-4204-ac57-37c4c0189687/manager/0.log" Jan 23 14:43:12 crc kubenswrapper[4775]: I0123 14:43:12.805376 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6d9458688d-v8dw9_272dcd84-1bb6-42cb-8c8e-6851f9f031de/manager/0.log" Jan 23 14:43:14 crc kubenswrapper[4775]: I0123 14:43:14.714567 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:43:14 crc kubenswrapper[4775]: E0123 14:43:14.715046 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:43:18 crc kubenswrapper[4775]: I0123 14:43:18.014784 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-7d978f-gdlmv_898c8554-82c6-4777-8869-15981e356a84/keystone-api/0.log" Jan 23 14:43:22 crc kubenswrapper[4775]: I0123 14:43:22.180735 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_memcached-0_2e1f7aa1-1780-4ccb-b1a5-66b9b279d555/memcached/0.log" Jan 23 14:43:22 crc kubenswrapper[4775]: I0123 14:43:22.733477 4775 log.go:25] "Finished parsing log file" 
path="/var/log/pods/nova-kuttl-default_nova-api-31e4-account-create-update-2rd2s_95df8848-8035-4302-9689-db060f7d4148/mariadb-account-create-update/0.log" Jan 23 14:43:23 crc kubenswrapper[4775]: I0123 14:43:23.289907 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-api-db-create-bfq79_98b564d3-5399-47b6-9397-4c3b006f9e13/mariadb-database-create/0.log" Jan 23 14:43:23 crc kubenswrapper[4775]: I0123 14:43:23.814582 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell0-db-create-d8kgs_603674a6-1055-4e27-b370-2b57865ebc55/mariadb-database-create/0.log" Jan 23 14:43:24 crc kubenswrapper[4775]: I0123 14:43:24.287003 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell0-f1e1-account-create-update-8ng7h_48eb2aff-1769-415f-b284-8d0cbf32a4e9/mariadb-account-create-update/0.log" Jan 23 14:43:24 crc kubenswrapper[4775]: I0123 14:43:24.809417 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell1-574a-account-create-update-mjhg8_15c2fb30-3be5-4e47-b2d3-8fbd54665494/mariadb-account-create-update/0.log" Jan 23 14:43:25 crc kubenswrapper[4775]: I0123 14:43:25.270067 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell1-db-create-82jzj_891c1a15-7b44-4c8f-be11-d06333a1d0d1/mariadb-database-create/0.log" Jan 23 14:43:25 crc kubenswrapper[4775]: I0123 14:43:25.866345 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-api-0_56066bf2-4408-46e5-8df0-6ce62447bf2a/nova-kuttl-api-log/0.log" Jan 23 14:43:26 crc kubenswrapper[4775]: I0123 14:43:26.446186 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-cell-mapping-qxjlc_a194a858-8c18-41e1-9a10-428397753ece/nova-manage/0.log" Jan 23 14:43:27 crc kubenswrapper[4775]: I0123 14:43:27.061201 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-conductor-0_fab3b1c6-093c-4891-957c-fad86eb8fd31/nova-kuttl-cell0-conductor-conductor/0.log" Jan 23 14:43:27 crc kubenswrapper[4775]: I0123 14:43:27.640599 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-conductor-db-sync-2l6n8_12f70e17-ec31-43fc-ac56-d1742f962de5/nova-kuttl-cell0-conductor-db-sync/0.log" Jan 23 14:43:27 crc kubenswrapper[4775]: I0123 14:43:27.715068 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:43:27 crc kubenswrapper[4775]: E0123 14:43:27.716097 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:43:28 crc kubenswrapper[4775]: I0123 14:43:28.202110 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-cell-mapping-4gfb8_3ef19dc5-1d78-479c-8220-340c46c44bdf/nova-manage/0.log" Jan 23 14:43:28 crc kubenswrapper[4775]: I0123 14:43:28.765175 4775 log.go:25] "Finished parsing log file" 
path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-conductor-0_1fd448a3-6897-490f-9c92-98590cee53ca/nova-kuttl-cell1-conductor-conductor/0.log" Jan 23 14:43:29 crc kubenswrapper[4775]: I0123 14:43:29.402795 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-conductor-db-sync-sjz5r_263d2fcc-c533-4291-8e78-d8e9a2ee2894/nova-kuttl-cell1-conductor-db-sync/0.log" Jan 23 14:43:30 crc kubenswrapper[4775]: I0123 14:43:30.019879 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-novncproxy-0_cb15b357-f464-4e43-a038-3b9e72455d49/nova-kuttl-cell1-novncproxy-novncproxy/0.log" Jan 23 14:43:30 crc kubenswrapper[4775]: I0123 14:43:30.602320 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-metadata-0_72d0a843-11de-43a6-9c92-6a65a6d406ec/nova-kuttl-metadata-log/0.log" Jan 23 14:43:31 crc kubenswrapper[4775]: I0123 14:43:31.101733 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-scheduler-0_bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0/nova-kuttl-scheduler-scheduler/0.log" Jan 23 14:43:31 crc kubenswrapper[4775]: I0123 14:43:31.577232 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_481cbe1b-2796-4ad2-a342-3661afa62383/galera/0.log" Jan 23 14:43:32 crc kubenswrapper[4775]: I0123 14:43:32.117733 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_372c512d-5894-49da-ae1e-cb3e54aadacc/galera/0.log" Jan 23 14:43:32 crc kubenswrapper[4775]: I0123 14:43:32.581680 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstackclient_76733f2d-491c-45dd-bcf5-1a4423019717/openstackclient/0.log" Jan 23 14:43:33 crc kubenswrapper[4775]: I0123 14:43:33.139277 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-7787b67bb8-psq7t_6b653824-2e32-431a-8b16-f8687610c0fe/placement-log/0.log" Jan 23 14:43:33 crc kubenswrapper[4775]: I0123 14:43:33.775462 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_401a94b6-0628-4cea-b62a-c3229a913d16/rabbitmq/0.log" Jan 23 14:43:34 crc kubenswrapper[4775]: I0123 14:43:34.352103 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_4b05c189-a694-4cbc-b679-a974e6bf99bc/rabbitmq/0.log" Jan 23 14:43:34 crc kubenswrapper[4775]: I0123 14:43:34.870458 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_70288c27-7f95-4843-a8fb-f2ac58ea8e1f/rabbitmq/0.log" Jan 23 14:43:41 crc kubenswrapper[4775]: I0123 14:43:41.714226 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:43:41 crc kubenswrapper[4775]: E0123 14:43:41.715545 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:43:55 crc kubenswrapper[4775]: I0123 14:43:55.714354 4775 scope.go:117] "RemoveContainer" 
containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:43:55 crc kubenswrapper[4775]: E0123 14:43:55.715488 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:44:08 crc kubenswrapper[4775]: I0123 14:44:08.713970 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:44:08 crc kubenswrapper[4775]: E0123 14:44:08.714823 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:44:10 crc kubenswrapper[4775]: I0123 14:44:10.539636 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt_100f3a0b-4d11-495f-a6fe-57b196820ee3/extract/0.log" Jan 23 14:44:11 crc kubenswrapper[4775]: I0123 14:44:11.071171 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc_a7025f67-434a-4dba-9b3a-e3b809f5c614/extract/0.log" Jan 23 14:44:11 crc kubenswrapper[4775]: I0123 14:44:11.613485 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-pk9jd_56ee00d0-c0f0-442a-bf4a-7335b62c1c4e/manager/0.log" Jan 23 14:44:12 crc kubenswrapper[4775]: I0123 14:44:12.088986 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-dz7ft_9ce79c2a-2c52-48de-80a6-887d592578d3/manager/0.log" Jan 23 14:44:12 crc kubenswrapper[4775]: I0123 14:44:12.577292 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-ppxmc_352223d5-fa0a-43df-8bad-0eaa9b6b439d/manager/0.log" Jan 23 14:44:13 crc kubenswrapper[4775]: I0123 14:44:13.025471 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-jq89z_64bae0eb-d703-4058-a545-b42d62045b90/manager/0.log" Jan 23 14:44:13 crc kubenswrapper[4775]: I0123 14:44:13.503669 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-xrmvt_841fb528-61a8-445e-a135-be26295bc975/manager/0.log" Jan 23 14:44:13 crc kubenswrapper[4775]: I0123 14:44:13.961423 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-sg9x5_d9e69fcf-58c9-45fe-a291-4628c8219e10/manager/0.log" Jan 23 14:44:14 crc kubenswrapper[4775]: I0123 14:44:14.653081 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-58749ffdfb-mcrj4_5a65a9ef-28c7-46ae-826d-5546af1103a5/manager/0.log" Jan 23 14:44:15 crc 
kubenswrapper[4775]: I0123 14:44:15.194420 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-f7lm6_d98bebb2-a42a-45a6-b452-a82ce1f62896/manager/0.log" Jan 23 14:44:15 crc kubenswrapper[4775]: I0123 14:44:15.825071 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-bgbpj_0784c928-e0c5-4afb-99cb-4f1f96820a14/manager/0.log" Jan 23 14:44:16 crc kubenswrapper[4775]: I0123 14:44:16.415133 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-pfdc5_853c6152-25bf-4374-a941-f9cd4202c87f/manager/0.log" Jan 23 14:44:17 crc kubenswrapper[4775]: I0123 14:44:17.023505 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg_bb6ce8ae-8d3f-4988-9386-6a20487f8ae9/manager/0.log" Jan 23 14:44:17 crc kubenswrapper[4775]: I0123 14:44:17.566137 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-sxkzh_9710b785-e422-4aca-88e8-e88d26d4e724/manager/0.log" Jan 23 14:44:18 crc kubenswrapper[4775]: I0123 14:44:18.674511 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7c5fcc4cc6-wwr78_92377252-2e4d-48bb-95ea-724a4ff5c788/manager/0.log" Jan 23 14:44:19 crc kubenswrapper[4775]: I0123 14:44:19.136871 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-index-x4gqk_78f375c8-5d62-4cbb-b348-8205d476d603/registry-server/0.log" Jan 23 14:44:19 crc kubenswrapper[4775]: I0123 14:44:19.556791 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-vl7m5_a07598ff-60cc-482e-a551-af751575709c/manager/0.log" Jan 23 14:44:20 crc kubenswrapper[4775]: I0123 14:44:20.072463 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854zk48c_44a963d8-d403-42d5-acd2-a0379f07db51/manager/0.log" Jan 23 14:44:21 crc kubenswrapper[4775]: I0123 14:44:21.022254 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-bb8f85db-bkqk9_313b5382-60cf-4627-8ba7-a091fc457989/manager/0.log" Jan 23 14:44:21 crc kubenswrapper[4775]: I0123 14:44:21.498174 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-5czdz_a0ddc210-ca29-42e4-a4c2-a07881434fed/registry-server/0.log" Jan 23 14:44:22 crc kubenswrapper[4775]: I0123 14:44:22.045737 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-xst4r_3d7c7bc6-5124-4cd4-a406-448ca94ba640/manager/0.log" Jan 23 14:44:22 crc kubenswrapper[4775]: I0123 14:44:22.561163 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-n4k5s_072b9a9d-8a08-454c-b1b6-628fcdcc91df/manager/0.log" Jan 23 14:44:22 crc kubenswrapper[4775]: I0123 14:44:22.714294 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:44:22 crc kubenswrapper[4775]: E0123 14:44:22.714726 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:44:23 crc kubenswrapper[4775]: I0123 14:44:23.071517 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-2lhsf_f9da51f1-a035-44b8-9391-0d6018a84c61/operator/0.log" Jan 23 14:44:23 crc kubenswrapper[4775]: I0123 14:44:23.545165 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-nqw74_ecef6080-ea2c-43f4-8ffa-da2ceb59369d/manager/0.log" Jan 23 14:44:24 crc kubenswrapper[4775]: I0123 14:44:24.000954 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-jrhlh_91da96b4-921a-4b88-9804-55745989e08b/manager/0.log" Jan 23 14:44:24 crc kubenswrapper[4775]: I0123 14:44:24.463260 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-xtmz8_9f9597bf-12a1-4204-ac57-37c4c0189687/manager/0.log" Jan 23 14:44:24 crc kubenswrapper[4775]: I0123 14:44:24.971495 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6d9458688d-v8dw9_272dcd84-1bb6-42cb-8c8e-6851f9f031de/manager/0.log" Jan 23 14:44:33 crc kubenswrapper[4775]: I0123 14:44:33.724448 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:44:33 crc kubenswrapper[4775]: E0123 14:44:33.725454 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:44:44 crc kubenswrapper[4775]: I0123 14:44:44.714898 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:44:44 crc kubenswrapper[4775]: E0123 14:44:44.716232 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.504175 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6vw8s/must-gather-9lvjt"] Jan 23 14:44:56 crc kubenswrapper[4775]: E0123 14:44:56.505192 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.505212 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc 
kubenswrapper[4775]: E0123 14:44:56.505225 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.505233 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: E0123 14:44:56.505257 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e9a2482-2cdd-40c0-b4f3-3caeadef05dd" containerName="extract-utilities" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.505267 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e9a2482-2cdd-40c0-b4f3-3caeadef05dd" containerName="extract-utilities" Jan 23 14:44:56 crc kubenswrapper[4775]: E0123 14:44:56.505278 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e9a2482-2cdd-40c0-b4f3-3caeadef05dd" containerName="registry-server" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.505285 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e9a2482-2cdd-40c0-b4f3-3caeadef05dd" containerName="registry-server" Jan 23 14:44:56 crc kubenswrapper[4775]: E0123 14:44:56.505300 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.505308 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: E0123 14:44:56.505317 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e9a2482-2cdd-40c0-b4f3-3caeadef05dd" containerName="extract-content" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.505324 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e9a2482-2cdd-40c0-b4f3-3caeadef05dd" containerName="extract-content" Jan 23 14:44:56 crc kubenswrapper[4775]: E0123 14:44:56.505337 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.505343 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: E0123 14:44:56.505357 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.505364 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.505531 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.505542 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.505554 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.505564 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e9a2482-2cdd-40c0-b4f3-3caeadef05dd" containerName="registry-server" Jan 23 14:44:56 crc 
kubenswrapper[4775]: I0123 14:44:56.505578 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.505586 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: E0123 14:44:56.505771 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.505783 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: E0123 14:44:56.505820 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.505830 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.506022 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.506036 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.506738 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6vw8s/must-gather-9lvjt" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.508687 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6vw8s"/"kube-root-ca.crt" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.511106 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6vw8s"/"openshift-service-ca.crt" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.528948 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6vw8s/must-gather-9lvjt"] Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.571688 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/41dd897c-4a67-4a0a-a7a3-c17b6d05653d-must-gather-output\") pod \"must-gather-9lvjt\" (UID: \"41dd897c-4a67-4a0a-a7a3-c17b6d05653d\") " pod="openshift-must-gather-6vw8s/must-gather-9lvjt" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.571755 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86lv4\" (UniqueName: \"kubernetes.io/projected/41dd897c-4a67-4a0a-a7a3-c17b6d05653d-kube-api-access-86lv4\") pod \"must-gather-9lvjt\" (UID: \"41dd897c-4a67-4a0a-a7a3-c17b6d05653d\") " pod="openshift-must-gather-6vw8s/must-gather-9lvjt" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.672956 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/41dd897c-4a67-4a0a-a7a3-c17b6d05653d-must-gather-output\") pod \"must-gather-9lvjt\" (UID: \"41dd897c-4a67-4a0a-a7a3-c17b6d05653d\") " pod="openshift-must-gather-6vw8s/must-gather-9lvjt" Jan 23 14:44:56 crc kubenswrapper[4775]: 
I0123 14:44:56.673282 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86lv4\" (UniqueName: \"kubernetes.io/projected/41dd897c-4a67-4a0a-a7a3-c17b6d05653d-kube-api-access-86lv4\") pod \"must-gather-9lvjt\" (UID: \"41dd897c-4a67-4a0a-a7a3-c17b6d05653d\") " pod="openshift-must-gather-6vw8s/must-gather-9lvjt" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.673375 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/41dd897c-4a67-4a0a-a7a3-c17b6d05653d-must-gather-output\") pod \"must-gather-9lvjt\" (UID: \"41dd897c-4a67-4a0a-a7a3-c17b6d05653d\") " pod="openshift-must-gather-6vw8s/must-gather-9lvjt" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.697577 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86lv4\" (UniqueName: \"kubernetes.io/projected/41dd897c-4a67-4a0a-a7a3-c17b6d05653d-kube-api-access-86lv4\") pod \"must-gather-9lvjt\" (UID: \"41dd897c-4a67-4a0a-a7a3-c17b6d05653d\") " pod="openshift-must-gather-6vw8s/must-gather-9lvjt" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.822789 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6vw8s/must-gather-9lvjt" Jan 23 14:44:57 crc kubenswrapper[4775]: I0123 14:44:57.283816 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6vw8s/must-gather-9lvjt"] Jan 23 14:44:57 crc kubenswrapper[4775]: I0123 14:44:57.287570 4775 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 14:44:57 crc kubenswrapper[4775]: I0123 14:44:57.352748 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6vw8s/must-gather-9lvjt" event={"ID":"41dd897c-4a67-4a0a-a7a3-c17b6d05653d","Type":"ContainerStarted","Data":"99e1c29464fe9bcd176edc66d996f75925c52d9eb2cd0ca1823f89a4e8988e3b"} Jan 23 14:44:57 crc kubenswrapper[4775]: I0123 14:44:57.713517 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:44:57 crc kubenswrapper[4775]: E0123 14:44:57.713819 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:45:00 crc kubenswrapper[4775]: I0123 14:45:00.153583 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2"] Jan 23 14:45:00 crc kubenswrapper[4775]: I0123 14:45:00.156265 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2" Jan 23 14:45:00 crc kubenswrapper[4775]: I0123 14:45:00.159843 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 14:45:00 crc kubenswrapper[4775]: I0123 14:45:00.160318 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 14:45:00 crc kubenswrapper[4775]: I0123 14:45:00.164564 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2"] Jan 23 14:45:00 crc kubenswrapper[4775]: I0123 14:45:00.229487 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4wzw\" (UniqueName: \"kubernetes.io/projected/77323df1-44af-4a49-bddf-3448c6d60ef1-kube-api-access-c4wzw\") pod \"collect-profiles-29486325-8cxr2\" (UID: \"77323df1-44af-4a49-bddf-3448c6d60ef1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2" Jan 23 14:45:00 crc kubenswrapper[4775]: I0123 14:45:00.229551 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77323df1-44af-4a49-bddf-3448c6d60ef1-config-volume\") pod \"collect-profiles-29486325-8cxr2\" (UID: \"77323df1-44af-4a49-bddf-3448c6d60ef1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2" Jan 23 14:45:00 crc kubenswrapper[4775]: I0123 14:45:00.229593 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77323df1-44af-4a49-bddf-3448c6d60ef1-secret-volume\") pod \"collect-profiles-29486325-8cxr2\" (UID: \"77323df1-44af-4a49-bddf-3448c6d60ef1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2" Jan 23 14:45:00 crc kubenswrapper[4775]: I0123 14:45:00.331563 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4wzw\" (UniqueName: \"kubernetes.io/projected/77323df1-44af-4a49-bddf-3448c6d60ef1-kube-api-access-c4wzw\") pod \"collect-profiles-29486325-8cxr2\" (UID: \"77323df1-44af-4a49-bddf-3448c6d60ef1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2" Jan 23 14:45:00 crc kubenswrapper[4775]: I0123 14:45:00.331641 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77323df1-44af-4a49-bddf-3448c6d60ef1-config-volume\") pod \"collect-profiles-29486325-8cxr2\" (UID: \"77323df1-44af-4a49-bddf-3448c6d60ef1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2" Jan 23 14:45:00 crc kubenswrapper[4775]: I0123 14:45:00.331681 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77323df1-44af-4a49-bddf-3448c6d60ef1-secret-volume\") pod \"collect-profiles-29486325-8cxr2\" (UID: \"77323df1-44af-4a49-bddf-3448c6d60ef1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2" Jan 23 14:45:00 crc kubenswrapper[4775]: I0123 14:45:00.332784 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77323df1-44af-4a49-bddf-3448c6d60ef1-config-volume\") pod 
\"collect-profiles-29486325-8cxr2\" (UID: \"77323df1-44af-4a49-bddf-3448c6d60ef1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2" Jan 23 14:45:00 crc kubenswrapper[4775]: I0123 14:45:00.338938 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77323df1-44af-4a49-bddf-3448c6d60ef1-secret-volume\") pod \"collect-profiles-29486325-8cxr2\" (UID: \"77323df1-44af-4a49-bddf-3448c6d60ef1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2" Jan 23 14:45:00 crc kubenswrapper[4775]: I0123 14:45:00.349307 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4wzw\" (UniqueName: \"kubernetes.io/projected/77323df1-44af-4a49-bddf-3448c6d60ef1-kube-api-access-c4wzw\") pod \"collect-profiles-29486325-8cxr2\" (UID: \"77323df1-44af-4a49-bddf-3448c6d60ef1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2" Jan 23 14:45:00 crc kubenswrapper[4775]: I0123 14:45:00.480190 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2" Jan 23 14:45:04 crc kubenswrapper[4775]: I0123 14:45:04.357391 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2"] Jan 23 14:45:04 crc kubenswrapper[4775]: I0123 14:45:04.408113 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2" event={"ID":"77323df1-44af-4a49-bddf-3448c6d60ef1","Type":"ContainerStarted","Data":"5bc1737e13d3f09907722fd400db2544dc6c9d6d22f1a34098443b6c8fd3462e"} Jan 23 14:45:04 crc kubenswrapper[4775]: I0123 14:45:04.410458 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6vw8s/must-gather-9lvjt" event={"ID":"41dd897c-4a67-4a0a-a7a3-c17b6d05653d","Type":"ContainerStarted","Data":"3998f4e1023e2b01b3b3037ee3f54b7b541f7dd5b790471a05de169061550d51"} Jan 23 14:45:04 crc kubenswrapper[4775]: I0123 14:45:04.433402 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6vw8s/must-gather-9lvjt" podStartSLOduration=1.791145347 podStartE2EDuration="8.433383001s" podCreationTimestamp="2026-01-23 14:44:56 +0000 UTC" firstStartedPulling="2026-01-23 14:44:57.287538787 +0000 UTC m=+2444.282367527" lastFinishedPulling="2026-01-23 14:45:03.929776431 +0000 UTC m=+2450.924605181" observedRunningTime="2026-01-23 14:45:04.425474619 +0000 UTC m=+2451.420303369" watchObservedRunningTime="2026-01-23 14:45:04.433383001 +0000 UTC m=+2451.428211751" Jan 23 14:45:05 crc kubenswrapper[4775]: I0123 14:45:05.423295 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6vw8s/must-gather-9lvjt" event={"ID":"41dd897c-4a67-4a0a-a7a3-c17b6d05653d","Type":"ContainerStarted","Data":"9795a40e8b362f20a5bafb6221130232aed660a8237ad820b9b5c489d963be47"} Jan 23 14:45:05 crc kubenswrapper[4775]: I0123 14:45:05.426225 4775 generic.go:334] "Generic (PLEG): container finished" podID="77323df1-44af-4a49-bddf-3448c6d60ef1" containerID="e0b9a370a20bb36d5e9d3347ae55e9688b7c1a75755503e572a5a4c809dc9026" exitCode=0 Jan 23 14:45:05 crc kubenswrapper[4775]: I0123 14:45:05.426312 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2" 
event={"ID":"77323df1-44af-4a49-bddf-3448c6d60ef1","Type":"ContainerDied","Data":"e0b9a370a20bb36d5e9d3347ae55e9688b7c1a75755503e572a5a4c809dc9026"} Jan 23 14:45:06 crc kubenswrapper[4775]: I0123 14:45:06.816987 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2" Jan 23 14:45:06 crc kubenswrapper[4775]: I0123 14:45:06.961571 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77323df1-44af-4a49-bddf-3448c6d60ef1-secret-volume\") pod \"77323df1-44af-4a49-bddf-3448c6d60ef1\" (UID: \"77323df1-44af-4a49-bddf-3448c6d60ef1\") " Jan 23 14:45:06 crc kubenswrapper[4775]: I0123 14:45:06.961627 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4wzw\" (UniqueName: \"kubernetes.io/projected/77323df1-44af-4a49-bddf-3448c6d60ef1-kube-api-access-c4wzw\") pod \"77323df1-44af-4a49-bddf-3448c6d60ef1\" (UID: \"77323df1-44af-4a49-bddf-3448c6d60ef1\") " Jan 23 14:45:06 crc kubenswrapper[4775]: I0123 14:45:06.961686 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77323df1-44af-4a49-bddf-3448c6d60ef1-config-volume\") pod \"77323df1-44af-4a49-bddf-3448c6d60ef1\" (UID: \"77323df1-44af-4a49-bddf-3448c6d60ef1\") " Jan 23 14:45:06 crc kubenswrapper[4775]: I0123 14:45:06.962491 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77323df1-44af-4a49-bddf-3448c6d60ef1-config-volume" (OuterVolumeSpecName: "config-volume") pod "77323df1-44af-4a49-bddf-3448c6d60ef1" (UID: "77323df1-44af-4a49-bddf-3448c6d60ef1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:45:06 crc kubenswrapper[4775]: I0123 14:45:06.970007 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77323df1-44af-4a49-bddf-3448c6d60ef1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "77323df1-44af-4a49-bddf-3448c6d60ef1" (UID: "77323df1-44af-4a49-bddf-3448c6d60ef1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:45:06 crc kubenswrapper[4775]: I0123 14:45:06.980375 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77323df1-44af-4a49-bddf-3448c6d60ef1-kube-api-access-c4wzw" (OuterVolumeSpecName: "kube-api-access-c4wzw") pod "77323df1-44af-4a49-bddf-3448c6d60ef1" (UID: "77323df1-44af-4a49-bddf-3448c6d60ef1"). InnerVolumeSpecName "kube-api-access-c4wzw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:45:07 crc kubenswrapper[4775]: I0123 14:45:07.063667 4775 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77323df1-44af-4a49-bddf-3448c6d60ef1-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 14:45:07 crc kubenswrapper[4775]: I0123 14:45:07.063704 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4wzw\" (UniqueName: \"kubernetes.io/projected/77323df1-44af-4a49-bddf-3448c6d60ef1-kube-api-access-c4wzw\") on node \"crc\" DevicePath \"\"" Jan 23 14:45:07 crc kubenswrapper[4775]: I0123 14:45:07.063715 4775 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77323df1-44af-4a49-bddf-3448c6d60ef1-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 14:45:07 crc kubenswrapper[4775]: I0123 14:45:07.442781 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2" event={"ID":"77323df1-44af-4a49-bddf-3448c6d60ef1","Type":"ContainerDied","Data":"5bc1737e13d3f09907722fd400db2544dc6c9d6d22f1a34098443b6c8fd3462e"} Jan 23 14:45:07 crc kubenswrapper[4775]: I0123 14:45:07.442841 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bc1737e13d3f09907722fd400db2544dc6c9d6d22f1a34098443b6c8fd3462e" Jan 23 14:45:07 crc kubenswrapper[4775]: I0123 14:45:07.442846 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2" Jan 23 14:45:07 crc kubenswrapper[4775]: I0123 14:45:07.894502 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b"] Jan 23 14:45:07 crc kubenswrapper[4775]: I0123 14:45:07.899247 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b"] Jan 23 14:45:09 crc kubenswrapper[4775]: I0123 14:45:09.723063 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d6b6f17-bb56-49ba-8487-6e07346780a1" path="/var/lib/kubelet/pods/2d6b6f17-bb56-49ba-8487-6e07346780a1/volumes" Jan 23 14:45:11 crc kubenswrapper[4775]: I0123 14:45:11.713630 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:45:11 crc kubenswrapper[4775]: E0123 14:45:11.713872 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:45:18 crc kubenswrapper[4775]: I0123 14:45:18.593725 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b72df"] Jan 23 14:45:18 crc kubenswrapper[4775]: E0123 14:45:18.594281 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77323df1-44af-4a49-bddf-3448c6d60ef1" containerName="collect-profiles" Jan 23 14:45:18 crc kubenswrapper[4775]: I0123 14:45:18.594293 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="77323df1-44af-4a49-bddf-3448c6d60ef1" containerName="collect-profiles" Jan 23 14:45:18 crc 
kubenswrapper[4775]: I0123 14:45:18.594422 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="77323df1-44af-4a49-bddf-3448c6d60ef1" containerName="collect-profiles" Jan 23 14:45:18 crc kubenswrapper[4775]: I0123 14:45:18.595491 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:18 crc kubenswrapper[4775]: I0123 14:45:18.612511 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b72df"] Jan 23 14:45:18 crc kubenswrapper[4775]: I0123 14:45:18.628654 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brqv9\" (UniqueName: \"kubernetes.io/projected/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-kube-api-access-brqv9\") pod \"redhat-marketplace-b72df\" (UID: \"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8\") " pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:18 crc kubenswrapper[4775]: I0123 14:45:18.628934 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-catalog-content\") pod \"redhat-marketplace-b72df\" (UID: \"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8\") " pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:18 crc kubenswrapper[4775]: I0123 14:45:18.629087 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-utilities\") pod \"redhat-marketplace-b72df\" (UID: \"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8\") " pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:18 crc kubenswrapper[4775]: I0123 14:45:18.730607 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-catalog-content\") pod \"redhat-marketplace-b72df\" (UID: \"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8\") " pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:18 crc kubenswrapper[4775]: I0123 14:45:18.730709 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-utilities\") pod \"redhat-marketplace-b72df\" (UID: \"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8\") " pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:18 crc kubenswrapper[4775]: I0123 14:45:18.730795 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brqv9\" (UniqueName: \"kubernetes.io/projected/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-kube-api-access-brqv9\") pod \"redhat-marketplace-b72df\" (UID: \"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8\") " pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:18 crc kubenswrapper[4775]: I0123 14:45:18.731564 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-catalog-content\") pod \"redhat-marketplace-b72df\" (UID: \"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8\") " pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:18 crc kubenswrapper[4775]: I0123 14:45:18.734829 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-utilities\") pod \"redhat-marketplace-b72df\" (UID: \"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8\") " pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:18 crc kubenswrapper[4775]: I0123 14:45:18.750477 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brqv9\" (UniqueName: \"kubernetes.io/projected/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-kube-api-access-brqv9\") pod \"redhat-marketplace-b72df\" (UID: \"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8\") " pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:18 crc kubenswrapper[4775]: I0123 14:45:18.916199 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:19 crc kubenswrapper[4775]: I0123 14:45:19.187719 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b72df"] Jan 23 14:45:19 crc kubenswrapper[4775]: E0123 14:45:19.502586 4775 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod721aa0ee_a7d9_4b8c_abb6_d0d6bcf2d4e8.slice/crio-conmon-a5e16b68c10a9969a5e16a2a094fc129c91f273289373387d352a062263279dc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod721aa0ee_a7d9_4b8c_abb6_d0d6bcf2d4e8.slice/crio-a5e16b68c10a9969a5e16a2a094fc129c91f273289373387d352a062263279dc.scope\": RecentStats: unable to find data in memory cache]" Jan 23 14:45:19 crc kubenswrapper[4775]: I0123 14:45:19.522772 4775 generic.go:334] "Generic (PLEG): container finished" podID="721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8" containerID="a5e16b68c10a9969a5e16a2a094fc129c91f273289373387d352a062263279dc" exitCode=0 Jan 23 14:45:19 crc kubenswrapper[4775]: I0123 14:45:19.522845 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b72df" event={"ID":"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8","Type":"ContainerDied","Data":"a5e16b68c10a9969a5e16a2a094fc129c91f273289373387d352a062263279dc"} Jan 23 14:45:19 crc kubenswrapper[4775]: I0123 14:45:19.523111 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b72df" event={"ID":"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8","Type":"ContainerStarted","Data":"1876587843ba1a3129234b73c3573e75d8bdb5bd737183e524a0c5243824c914"} Jan 23 14:45:20 crc kubenswrapper[4775]: I0123 14:45:20.534735 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b72df" event={"ID":"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8","Type":"ContainerStarted","Data":"d490002dcf01c76830fe6869be1af01ccffbf1daf8a5c956ecf293f43ed68369"} Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.546794 4775 generic.go:334] "Generic (PLEG): container finished" podID="721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8" containerID="d490002dcf01c76830fe6869be1af01ccffbf1daf8a5c956ecf293f43ed68369" exitCode=0 Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.546922 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b72df" event={"ID":"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8","Type":"ContainerDied","Data":"d490002dcf01c76830fe6869be1af01ccffbf1daf8a5c956ecf293f43ed68369"} Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.628855 4775 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-lq9jn"] Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.630768 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.646048 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lq9jn"] Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.677983 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5820a548-636b-4a69-b8d6-b947ee11e3fd-catalog-content\") pod \"certified-operators-lq9jn\" (UID: \"5820a548-636b-4a69-b8d6-b947ee11e3fd\") " pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.678053 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5820a548-636b-4a69-b8d6-b947ee11e3fd-utilities\") pod \"certified-operators-lq9jn\" (UID: \"5820a548-636b-4a69-b8d6-b947ee11e3fd\") " pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.678127 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94wnx\" (UniqueName: \"kubernetes.io/projected/5820a548-636b-4a69-b8d6-b947ee11e3fd-kube-api-access-94wnx\") pod \"certified-operators-lq9jn\" (UID: \"5820a548-636b-4a69-b8d6-b947ee11e3fd\") " pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.779173 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5820a548-636b-4a69-b8d6-b947ee11e3fd-catalog-content\") pod \"certified-operators-lq9jn\" (UID: \"5820a548-636b-4a69-b8d6-b947ee11e3fd\") " pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.779240 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5820a548-636b-4a69-b8d6-b947ee11e3fd-utilities\") pod \"certified-operators-lq9jn\" (UID: \"5820a548-636b-4a69-b8d6-b947ee11e3fd\") " pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.779319 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94wnx\" (UniqueName: \"kubernetes.io/projected/5820a548-636b-4a69-b8d6-b947ee11e3fd-kube-api-access-94wnx\") pod \"certified-operators-lq9jn\" (UID: \"5820a548-636b-4a69-b8d6-b947ee11e3fd\") " pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.779629 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5820a548-636b-4a69-b8d6-b947ee11e3fd-catalog-content\") pod \"certified-operators-lq9jn\" (UID: \"5820a548-636b-4a69-b8d6-b947ee11e3fd\") " pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.779722 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5820a548-636b-4a69-b8d6-b947ee11e3fd-utilities\") pod \"certified-operators-lq9jn\" (UID: 
\"5820a548-636b-4a69-b8d6-b947ee11e3fd\") " pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.792901 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dx9cv"] Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.794733 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.802842 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94wnx\" (UniqueName: \"kubernetes.io/projected/5820a548-636b-4a69-b8d6-b947ee11e3fd-kube-api-access-94wnx\") pod \"certified-operators-lq9jn\" (UID: \"5820a548-636b-4a69-b8d6-b947ee11e3fd\") " pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.812308 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dx9cv"] Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.880995 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-utilities\") pod \"community-operators-dx9cv\" (UID: \"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c\") " pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.881086 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-catalog-content\") pod \"community-operators-dx9cv\" (UID: \"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c\") " pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.881131 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkfqb\" (UniqueName: \"kubernetes.io/projected/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-kube-api-access-lkfqb\") pod \"community-operators-dx9cv\" (UID: \"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c\") " pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.972774 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.982682 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-catalog-content\") pod \"community-operators-dx9cv\" (UID: \"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c\") " pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.982745 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkfqb\" (UniqueName: \"kubernetes.io/projected/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-kube-api-access-lkfqb\") pod \"community-operators-dx9cv\" (UID: \"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c\") " pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.982845 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-utilities\") pod \"community-operators-dx9cv\" (UID: \"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c\") " pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.983429 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-catalog-content\") pod \"community-operators-dx9cv\" (UID: \"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c\") " pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.983474 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-utilities\") pod \"community-operators-dx9cv\" (UID: \"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c\") " pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:22 crc kubenswrapper[4775]: I0123 14:45:22.002464 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkfqb\" (UniqueName: \"kubernetes.io/projected/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-kube-api-access-lkfqb\") pod \"community-operators-dx9cv\" (UID: \"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c\") " pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:22 crc kubenswrapper[4775]: I0123 14:45:22.140895 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:22 crc kubenswrapper[4775]: I0123 14:45:22.263289 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lq9jn"] Jan 23 14:45:22 crc kubenswrapper[4775]: I0123 14:45:22.555121 4775 generic.go:334] "Generic (PLEG): container finished" podID="5820a548-636b-4a69-b8d6-b947ee11e3fd" containerID="4520cae67722951660081decece3745c0abd896e4df9ffd0b009c00188cac1ea" exitCode=0 Jan 23 14:45:22 crc kubenswrapper[4775]: I0123 14:45:22.555231 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq9jn" event={"ID":"5820a548-636b-4a69-b8d6-b947ee11e3fd","Type":"ContainerDied","Data":"4520cae67722951660081decece3745c0abd896e4df9ffd0b009c00188cac1ea"} Jan 23 14:45:22 crc kubenswrapper[4775]: I0123 14:45:22.555497 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq9jn" event={"ID":"5820a548-636b-4a69-b8d6-b947ee11e3fd","Type":"ContainerStarted","Data":"3994761e9ce661e4a34641df7fae6257581617303ab3ad370a636bae72fa58e7"} Jan 23 14:45:22 crc kubenswrapper[4775]: I0123 14:45:22.560904 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b72df" event={"ID":"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8","Type":"ContainerStarted","Data":"382ed8905708faff102d9d1639d93d5308178de011ad542072e24ecf75b61466"} Jan 23 14:45:22 crc kubenswrapper[4775]: I0123 14:45:22.600196 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b72df" podStartSLOduration=2.102605028 podStartE2EDuration="4.600176317s" podCreationTimestamp="2026-01-23 14:45:18 +0000 UTC" firstStartedPulling="2026-01-23 14:45:19.5249901 +0000 UTC m=+2466.519818850" lastFinishedPulling="2026-01-23 14:45:22.022561399 +0000 UTC m=+2469.017390139" observedRunningTime="2026-01-23 14:45:22.595236808 +0000 UTC m=+2469.590065558" watchObservedRunningTime="2026-01-23 14:45:22.600176317 +0000 UTC m=+2469.595005057" Jan 23 14:45:22 crc kubenswrapper[4775]: I0123 14:45:22.713580 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:45:22 crc kubenswrapper[4775]: E0123 14:45:22.713824 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:45:22 crc kubenswrapper[4775]: W0123 14:45:22.744185 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a0e2681_58a7_4050_9dd0_3b0d77bdde6c.slice/crio-22f3073770a1d6de80d758482e7424eaebf2c01e4873d8bad8fa60b2ece1d9e7 WatchSource:0}: Error finding container 22f3073770a1d6de80d758482e7424eaebf2c01e4873d8bad8fa60b2ece1d9e7: Status 404 returned error can't find the container with id 22f3073770a1d6de80d758482e7424eaebf2c01e4873d8bad8fa60b2ece1d9e7 Jan 23 14:45:22 crc kubenswrapper[4775]: I0123 14:45:22.744655 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dx9cv"] Jan 23 14:45:23 crc kubenswrapper[4775]: I0123 14:45:23.569087 
4775 generic.go:334] "Generic (PLEG): container finished" podID="9a0e2681-58a7-4050-9dd0-3b0d77bdde6c" containerID="b1a958b7ba0a68879426427846a7151029fe0ce0287070b144b203617336347b" exitCode=0 Jan 23 14:45:23 crc kubenswrapper[4775]: I0123 14:45:23.569146 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dx9cv" event={"ID":"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c","Type":"ContainerDied","Data":"b1a958b7ba0a68879426427846a7151029fe0ce0287070b144b203617336347b"} Jan 23 14:45:23 crc kubenswrapper[4775]: I0123 14:45:23.569569 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dx9cv" event={"ID":"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c","Type":"ContainerStarted","Data":"22f3073770a1d6de80d758482e7424eaebf2c01e4873d8bad8fa60b2ece1d9e7"} Jan 23 14:45:24 crc kubenswrapper[4775]: I0123 14:45:24.580622 4775 generic.go:334] "Generic (PLEG): container finished" podID="5820a548-636b-4a69-b8d6-b947ee11e3fd" containerID="66381161ea8e3e8b4f98c07e994e28deb923fe56808c28c223ee02a3a51be123" exitCode=0 Jan 23 14:45:24 crc kubenswrapper[4775]: I0123 14:45:24.580681 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq9jn" event={"ID":"5820a548-636b-4a69-b8d6-b947ee11e3fd","Type":"ContainerDied","Data":"66381161ea8e3e8b4f98c07e994e28deb923fe56808c28c223ee02a3a51be123"} Jan 23 14:45:24 crc kubenswrapper[4775]: I0123 14:45:24.584542 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dx9cv" event={"ID":"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c","Type":"ContainerStarted","Data":"5ecdadc895819a7d51d81998b7771e7fd9d02ea2237f791eeda62b2bfd242c2e"} Jan 23 14:45:25 crc kubenswrapper[4775]: I0123 14:45:25.594740 4775 generic.go:334] "Generic (PLEG): container finished" podID="9a0e2681-58a7-4050-9dd0-3b0d77bdde6c" containerID="5ecdadc895819a7d51d81998b7771e7fd9d02ea2237f791eeda62b2bfd242c2e" exitCode=0 Jan 23 14:45:25 crc kubenswrapper[4775]: I0123 14:45:25.594792 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dx9cv" event={"ID":"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c","Type":"ContainerDied","Data":"5ecdadc895819a7d51d81998b7771e7fd9d02ea2237f791eeda62b2bfd242c2e"} Jan 23 14:45:26 crc kubenswrapper[4775]: I0123 14:45:26.605678 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq9jn" event={"ID":"5820a548-636b-4a69-b8d6-b947ee11e3fd","Type":"ContainerStarted","Data":"b69437c4162f77acf88b1d79a4f540a54a2a84bc513d0ed39faac3250e860c26"} Jan 23 14:45:26 crc kubenswrapper[4775]: I0123 14:45:26.634448 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lq9jn" podStartSLOduration=2.743281212 podStartE2EDuration="5.634432652s" podCreationTimestamp="2026-01-23 14:45:21 +0000 UTC" firstStartedPulling="2026-01-23 14:45:22.557259431 +0000 UTC m=+2469.552088171" lastFinishedPulling="2026-01-23 14:45:25.448410831 +0000 UTC m=+2472.443239611" observedRunningTime="2026-01-23 14:45:26.632709454 +0000 UTC m=+2473.627538214" watchObservedRunningTime="2026-01-23 14:45:26.634432652 +0000 UTC m=+2473.629261392" Jan 23 14:45:27 crc kubenswrapper[4775]: I0123 14:45:27.618991 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dx9cv" 
event={"ID":"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c","Type":"ContainerStarted","Data":"344024f9787414d7f0930477d4a12f685db8d20254b2af6053620d5037722c54"} Jan 23 14:45:27 crc kubenswrapper[4775]: I0123 14:45:27.641616 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dx9cv" podStartSLOduration=3.605961506 podStartE2EDuration="6.641589132s" podCreationTimestamp="2026-01-23 14:45:21 +0000 UTC" firstStartedPulling="2026-01-23 14:45:23.570332547 +0000 UTC m=+2470.565161287" lastFinishedPulling="2026-01-23 14:45:26.605960173 +0000 UTC m=+2473.600788913" observedRunningTime="2026-01-23 14:45:27.638249878 +0000 UTC m=+2474.633078648" watchObservedRunningTime="2026-01-23 14:45:27.641589132 +0000 UTC m=+2474.636417902" Jan 23 14:45:28 crc kubenswrapper[4775]: I0123 14:45:28.916541 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:28 crc kubenswrapper[4775]: I0123 14:45:28.916861 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:28 crc kubenswrapper[4775]: I0123 14:45:28.965329 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:29 crc kubenswrapper[4775]: I0123 14:45:29.676175 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:30 crc kubenswrapper[4775]: I0123 14:45:30.989327 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b72df"] Jan 23 14:45:31 crc kubenswrapper[4775]: I0123 14:45:31.648561 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b72df" podUID="721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8" containerName="registry-server" containerID="cri-o://382ed8905708faff102d9d1639d93d5308178de011ad542072e24ecf75b61466" gracePeriod=2 Jan 23 14:45:31 crc kubenswrapper[4775]: I0123 14:45:31.972958 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:31 crc kubenswrapper[4775]: I0123 14:45:31.983124 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.026445 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.053157 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.141614 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.141664 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.185444 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.255053 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-catalog-content\") pod \"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8\" (UID: \"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8\") " Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.255169 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-utilities\") pod \"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8\" (UID: \"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8\") " Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.255297 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brqv9\" (UniqueName: \"kubernetes.io/projected/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-kube-api-access-brqv9\") pod \"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8\" (UID: \"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8\") " Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.256045 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-utilities" (OuterVolumeSpecName: "utilities") pod "721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8" (UID: "721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.268123 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-kube-api-access-brqv9" (OuterVolumeSpecName: "kube-api-access-brqv9") pod "721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8" (UID: "721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8"). InnerVolumeSpecName "kube-api-access-brqv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.281985 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8" (UID: "721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.357749 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.357784 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brqv9\" (UniqueName: \"kubernetes.io/projected/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-kube-api-access-brqv9\") on node \"crc\" DevicePath \"\"" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.357815 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.664137 4775 generic.go:334] "Generic (PLEG): container finished" podID="721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8" containerID="382ed8905708faff102d9d1639d93d5308178de011ad542072e24ecf75b61466" exitCode=0 Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.664509 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b72df" event={"ID":"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8","Type":"ContainerDied","Data":"382ed8905708faff102d9d1639d93d5308178de011ad542072e24ecf75b61466"} Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.664598 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.664600 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b72df" event={"ID":"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8","Type":"ContainerDied","Data":"1876587843ba1a3129234b73c3573e75d8bdb5bd737183e524a0c5243824c914"} Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.664624 4775 scope.go:117] "RemoveContainer" containerID="382ed8905708faff102d9d1639d93d5308178de011ad542072e24ecf75b61466" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.710516 4775 scope.go:117] "RemoveContainer" containerID="d490002dcf01c76830fe6869be1af01ccffbf1daf8a5c956ecf293f43ed68369" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.718527 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b72df"] Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.725345 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b72df"] Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.738839 4775 scope.go:117] "RemoveContainer" containerID="a5e16b68c10a9969a5e16a2a094fc129c91f273289373387d352a062263279dc" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.756733 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.761533 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.785557 4775 scope.go:117] "RemoveContainer" containerID="382ed8905708faff102d9d1639d93d5308178de011ad542072e24ecf75b61466" Jan 23 14:45:32 crc kubenswrapper[4775]: E0123 14:45:32.786231 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"382ed8905708faff102d9d1639d93d5308178de011ad542072e24ecf75b61466\": container with ID starting with 382ed8905708faff102d9d1639d93d5308178de011ad542072e24ecf75b61466 not found: ID does not exist" containerID="382ed8905708faff102d9d1639d93d5308178de011ad542072e24ecf75b61466" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.786276 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"382ed8905708faff102d9d1639d93d5308178de011ad542072e24ecf75b61466"} err="failed to get container status \"382ed8905708faff102d9d1639d93d5308178de011ad542072e24ecf75b61466\": rpc error: code = NotFound desc = could not find container \"382ed8905708faff102d9d1639d93d5308178de011ad542072e24ecf75b61466\": container with ID starting with 382ed8905708faff102d9d1639d93d5308178de011ad542072e24ecf75b61466 not found: ID does not exist" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.786310 4775 scope.go:117] "RemoveContainer" containerID="d490002dcf01c76830fe6869be1af01ccffbf1daf8a5c956ecf293f43ed68369" Jan 23 14:45:32 crc kubenswrapper[4775]: E0123 14:45:32.790034 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d490002dcf01c76830fe6869be1af01ccffbf1daf8a5c956ecf293f43ed68369\": container with ID starting with d490002dcf01c76830fe6869be1af01ccffbf1daf8a5c956ecf293f43ed68369 not found: ID does not exist" containerID="d490002dcf01c76830fe6869be1af01ccffbf1daf8a5c956ecf293f43ed68369" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.790101 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d490002dcf01c76830fe6869be1af01ccffbf1daf8a5c956ecf293f43ed68369"} err="failed to get container status \"d490002dcf01c76830fe6869be1af01ccffbf1daf8a5c956ecf293f43ed68369\": rpc error: code = NotFound desc = could not find container \"d490002dcf01c76830fe6869be1af01ccffbf1daf8a5c956ecf293f43ed68369\": container with ID starting with d490002dcf01c76830fe6869be1af01ccffbf1daf8a5c956ecf293f43ed68369 not found: ID does not exist" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.790138 4775 scope.go:117] "RemoveContainer" containerID="a5e16b68c10a9969a5e16a2a094fc129c91f273289373387d352a062263279dc" Jan 23 14:45:32 crc kubenswrapper[4775]: E0123 14:45:32.792006 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5e16b68c10a9969a5e16a2a094fc129c91f273289373387d352a062263279dc\": container with ID starting with a5e16b68c10a9969a5e16a2a094fc129c91f273289373387d352a062263279dc not found: ID does not exist" containerID="a5e16b68c10a9969a5e16a2a094fc129c91f273289373387d352a062263279dc" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.792054 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5e16b68c10a9969a5e16a2a094fc129c91f273289373387d352a062263279dc"} err="failed to get container status \"a5e16b68c10a9969a5e16a2a094fc129c91f273289373387d352a062263279dc\": rpc error: code = NotFound desc = could not find container \"a5e16b68c10a9969a5e16a2a094fc129c91f273289373387d352a062263279dc\": container with ID starting with a5e16b68c10a9969a5e16a2a094fc129c91f273289373387d352a062263279dc not found: ID does not exist" Jan 23 14:45:33 crc kubenswrapper[4775]: I0123 14:45:33.730041 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8" path="/var/lib/kubelet/pods/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8/volumes" Jan 23 14:45:34 crc kubenswrapper[4775]: I0123 14:45:34.714217 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:45:34 crc kubenswrapper[4775]: E0123 14:45:34.714577 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:45:34 crc kubenswrapper[4775]: I0123 14:45:34.785666 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dx9cv"] Jan 23 14:45:34 crc kubenswrapper[4775]: I0123 14:45:34.786027 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dx9cv" podUID="9a0e2681-58a7-4050-9dd0-3b0d77bdde6c" containerName="registry-server" containerID="cri-o://344024f9787414d7f0930477d4a12f685db8d20254b2af6053620d5037722c54" gracePeriod=2 Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.196422 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.302886 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-catalog-content\") pod \"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c\" (UID: \"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c\") " Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.303007 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-utilities\") pod \"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c\" (UID: \"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c\") " Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.303086 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkfqb\" (UniqueName: \"kubernetes.io/projected/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-kube-api-access-lkfqb\") pod \"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c\" (UID: \"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c\") " Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.303690 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-utilities" (OuterVolumeSpecName: "utilities") pod "9a0e2681-58a7-4050-9dd0-3b0d77bdde6c" (UID: "9a0e2681-58a7-4050-9dd0-3b0d77bdde6c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.308133 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-kube-api-access-lkfqb" (OuterVolumeSpecName: "kube-api-access-lkfqb") pod "9a0e2681-58a7-4050-9dd0-3b0d77bdde6c" (UID: "9a0e2681-58a7-4050-9dd0-3b0d77bdde6c"). InnerVolumeSpecName "kube-api-access-lkfqb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.360260 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a0e2681-58a7-4050-9dd0-3b0d77bdde6c" (UID: "9a0e2681-58a7-4050-9dd0-3b0d77bdde6c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.380989 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lq9jn"] Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.404340 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.404369 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkfqb\" (UniqueName: \"kubernetes.io/projected/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-kube-api-access-lkfqb\") on node \"crc\" DevicePath \"\"" Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.404379 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.695049 4775 generic.go:334] "Generic (PLEG): container finished" podID="9a0e2681-58a7-4050-9dd0-3b0d77bdde6c" containerID="344024f9787414d7f0930477d4a12f685db8d20254b2af6053620d5037722c54" exitCode=0 Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.695133 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dx9cv" event={"ID":"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c","Type":"ContainerDied","Data":"344024f9787414d7f0930477d4a12f685db8d20254b2af6053620d5037722c54"} Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.695164 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.695217 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dx9cv" event={"ID":"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c","Type":"ContainerDied","Data":"22f3073770a1d6de80d758482e7424eaebf2c01e4873d8bad8fa60b2ece1d9e7"} Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.695247 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lq9jn" podUID="5820a548-636b-4a69-b8d6-b947ee11e3fd" containerName="registry-server" containerID="cri-o://b69437c4162f77acf88b1d79a4f540a54a2a84bc513d0ed39faac3250e860c26" gracePeriod=2 Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.695253 4775 scope.go:117] "RemoveContainer" containerID="344024f9787414d7f0930477d4a12f685db8d20254b2af6053620d5037722c54" Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.740038 4775 scope.go:117] "RemoveContainer" containerID="5ecdadc895819a7d51d81998b7771e7fd9d02ea2237f791eeda62b2bfd242c2e" Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.747616 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dx9cv"] Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.747673 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dx9cv"] Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.831449 4775 scope.go:117] "RemoveContainer" containerID="b1a958b7ba0a68879426427846a7151029fe0ce0287070b144b203617336347b" Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.870542 4775 scope.go:117] "RemoveContainer" containerID="344024f9787414d7f0930477d4a12f685db8d20254b2af6053620d5037722c54" Jan 23 14:45:35 crc kubenswrapper[4775]: E0123 14:45:35.871265 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"344024f9787414d7f0930477d4a12f685db8d20254b2af6053620d5037722c54\": container with ID starting with 344024f9787414d7f0930477d4a12f685db8d20254b2af6053620d5037722c54 not found: ID does not exist" containerID="344024f9787414d7f0930477d4a12f685db8d20254b2af6053620d5037722c54" Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.871326 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"344024f9787414d7f0930477d4a12f685db8d20254b2af6053620d5037722c54"} err="failed to get container status \"344024f9787414d7f0930477d4a12f685db8d20254b2af6053620d5037722c54\": rpc error: code = NotFound desc = could not find container \"344024f9787414d7f0930477d4a12f685db8d20254b2af6053620d5037722c54\": container with ID starting with 344024f9787414d7f0930477d4a12f685db8d20254b2af6053620d5037722c54 not found: ID does not exist" Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.871346 4775 scope.go:117] "RemoveContainer" containerID="5ecdadc895819a7d51d81998b7771e7fd9d02ea2237f791eeda62b2bfd242c2e" Jan 23 14:45:35 crc kubenswrapper[4775]: E0123 14:45:35.874615 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ecdadc895819a7d51d81998b7771e7fd9d02ea2237f791eeda62b2bfd242c2e\": container with ID starting with 5ecdadc895819a7d51d81998b7771e7fd9d02ea2237f791eeda62b2bfd242c2e not found: ID does not exist" containerID="5ecdadc895819a7d51d81998b7771e7fd9d02ea2237f791eeda62b2bfd242c2e" Jan 23 14:45:35 crc 
kubenswrapper[4775]: I0123 14:45:35.874656 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ecdadc895819a7d51d81998b7771e7fd9d02ea2237f791eeda62b2bfd242c2e"} err="failed to get container status \"5ecdadc895819a7d51d81998b7771e7fd9d02ea2237f791eeda62b2bfd242c2e\": rpc error: code = NotFound desc = could not find container \"5ecdadc895819a7d51d81998b7771e7fd9d02ea2237f791eeda62b2bfd242c2e\": container with ID starting with 5ecdadc895819a7d51d81998b7771e7fd9d02ea2237f791eeda62b2bfd242c2e not found: ID does not exist" Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.874677 4775 scope.go:117] "RemoveContainer" containerID="b1a958b7ba0a68879426427846a7151029fe0ce0287070b144b203617336347b" Jan 23 14:45:35 crc kubenswrapper[4775]: E0123 14:45:35.874986 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1a958b7ba0a68879426427846a7151029fe0ce0287070b144b203617336347b\": container with ID starting with b1a958b7ba0a68879426427846a7151029fe0ce0287070b144b203617336347b not found: ID does not exist" containerID="b1a958b7ba0a68879426427846a7151029fe0ce0287070b144b203617336347b" Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.875072 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1a958b7ba0a68879426427846a7151029fe0ce0287070b144b203617336347b"} err="failed to get container status \"b1a958b7ba0a68879426427846a7151029fe0ce0287070b144b203617336347b\": rpc error: code = NotFound desc = could not find container \"b1a958b7ba0a68879426427846a7151029fe0ce0287070b144b203617336347b\": container with ID starting with b1a958b7ba0a68879426427846a7151029fe0ce0287070b144b203617336347b not found: ID does not exist" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.103143 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.215943 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94wnx\" (UniqueName: \"kubernetes.io/projected/5820a548-636b-4a69-b8d6-b947ee11e3fd-kube-api-access-94wnx\") pod \"5820a548-636b-4a69-b8d6-b947ee11e3fd\" (UID: \"5820a548-636b-4a69-b8d6-b947ee11e3fd\") " Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.216065 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5820a548-636b-4a69-b8d6-b947ee11e3fd-catalog-content\") pod \"5820a548-636b-4a69-b8d6-b947ee11e3fd\" (UID: \"5820a548-636b-4a69-b8d6-b947ee11e3fd\") " Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.216140 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5820a548-636b-4a69-b8d6-b947ee11e3fd-utilities\") pod \"5820a548-636b-4a69-b8d6-b947ee11e3fd\" (UID: \"5820a548-636b-4a69-b8d6-b947ee11e3fd\") " Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.217225 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5820a548-636b-4a69-b8d6-b947ee11e3fd-utilities" (OuterVolumeSpecName: "utilities") pod "5820a548-636b-4a69-b8d6-b947ee11e3fd" (UID: "5820a548-636b-4a69-b8d6-b947ee11e3fd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.222981 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5820a548-636b-4a69-b8d6-b947ee11e3fd-kube-api-access-94wnx" (OuterVolumeSpecName: "kube-api-access-94wnx") pod "5820a548-636b-4a69-b8d6-b947ee11e3fd" (UID: "5820a548-636b-4a69-b8d6-b947ee11e3fd"). InnerVolumeSpecName "kube-api-access-94wnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.273071 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5820a548-636b-4a69-b8d6-b947ee11e3fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5820a548-636b-4a69-b8d6-b947ee11e3fd" (UID: "5820a548-636b-4a69-b8d6-b947ee11e3fd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.317618 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5820a548-636b-4a69-b8d6-b947ee11e3fd-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.317648 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94wnx\" (UniqueName: \"kubernetes.io/projected/5820a548-636b-4a69-b8d6-b947ee11e3fd-kube-api-access-94wnx\") on node \"crc\" DevicePath \"\"" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.317661 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5820a548-636b-4a69-b8d6-b947ee11e3fd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.706650 4775 generic.go:334] "Generic (PLEG): container finished" podID="5820a548-636b-4a69-b8d6-b947ee11e3fd" containerID="b69437c4162f77acf88b1d79a4f540a54a2a84bc513d0ed39faac3250e860c26" exitCode=0 Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.706724 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.706743 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq9jn" event={"ID":"5820a548-636b-4a69-b8d6-b947ee11e3fd","Type":"ContainerDied","Data":"b69437c4162f77acf88b1d79a4f540a54a2a84bc513d0ed39faac3250e860c26"} Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.707161 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq9jn" event={"ID":"5820a548-636b-4a69-b8d6-b947ee11e3fd","Type":"ContainerDied","Data":"3994761e9ce661e4a34641df7fae6257581617303ab3ad370a636bae72fa58e7"} Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.707186 4775 scope.go:117] "RemoveContainer" containerID="b69437c4162f77acf88b1d79a4f540a54a2a84bc513d0ed39faac3250e860c26" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.728406 4775 scope.go:117] "RemoveContainer" containerID="66381161ea8e3e8b4f98c07e994e28deb923fe56808c28c223ee02a3a51be123" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.749106 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lq9jn"] Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.760335 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lq9jn"] Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.763718 4775 scope.go:117] "RemoveContainer" containerID="4520cae67722951660081decece3745c0abd896e4df9ffd0b009c00188cac1ea" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.789871 4775 scope.go:117] "RemoveContainer" containerID="b69437c4162f77acf88b1d79a4f540a54a2a84bc513d0ed39faac3250e860c26" Jan 23 14:45:36 crc kubenswrapper[4775]: E0123 14:45:36.790384 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b69437c4162f77acf88b1d79a4f540a54a2a84bc513d0ed39faac3250e860c26\": container with ID starting with b69437c4162f77acf88b1d79a4f540a54a2a84bc513d0ed39faac3250e860c26 not found: ID does not exist" containerID="b69437c4162f77acf88b1d79a4f540a54a2a84bc513d0ed39faac3250e860c26" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.790442 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b69437c4162f77acf88b1d79a4f540a54a2a84bc513d0ed39faac3250e860c26"} err="failed to get container status \"b69437c4162f77acf88b1d79a4f540a54a2a84bc513d0ed39faac3250e860c26\": rpc error: code = NotFound desc = could not find container \"b69437c4162f77acf88b1d79a4f540a54a2a84bc513d0ed39faac3250e860c26\": container with ID starting with b69437c4162f77acf88b1d79a4f540a54a2a84bc513d0ed39faac3250e860c26 not found: ID does not exist" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.790485 4775 scope.go:117] "RemoveContainer" containerID="66381161ea8e3e8b4f98c07e994e28deb923fe56808c28c223ee02a3a51be123" Jan 23 14:45:36 crc kubenswrapper[4775]: E0123 14:45:36.791044 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66381161ea8e3e8b4f98c07e994e28deb923fe56808c28c223ee02a3a51be123\": container with ID starting with 66381161ea8e3e8b4f98c07e994e28deb923fe56808c28c223ee02a3a51be123 not found: ID does not exist" containerID="66381161ea8e3e8b4f98c07e994e28deb923fe56808c28c223ee02a3a51be123" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.791107 4775 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66381161ea8e3e8b4f98c07e994e28deb923fe56808c28c223ee02a3a51be123"} err="failed to get container status \"66381161ea8e3e8b4f98c07e994e28deb923fe56808c28c223ee02a3a51be123\": rpc error: code = NotFound desc = could not find container \"66381161ea8e3e8b4f98c07e994e28deb923fe56808c28c223ee02a3a51be123\": container with ID starting with 66381161ea8e3e8b4f98c07e994e28deb923fe56808c28c223ee02a3a51be123 not found: ID does not exist" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.791141 4775 scope.go:117] "RemoveContainer" containerID="4520cae67722951660081decece3745c0abd896e4df9ffd0b009c00188cac1ea" Jan 23 14:45:36 crc kubenswrapper[4775]: E0123 14:45:36.791707 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4520cae67722951660081decece3745c0abd896e4df9ffd0b009c00188cac1ea\": container with ID starting with 4520cae67722951660081decece3745c0abd896e4df9ffd0b009c00188cac1ea not found: ID does not exist" containerID="4520cae67722951660081decece3745c0abd896e4df9ffd0b009c00188cac1ea" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.791738 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4520cae67722951660081decece3745c0abd896e4df9ffd0b009c00188cac1ea"} err="failed to get container status \"4520cae67722951660081decece3745c0abd896e4df9ffd0b009c00188cac1ea\": rpc error: code = NotFound desc = could not find container \"4520cae67722951660081decece3745c0abd896e4df9ffd0b009c00188cac1ea\": container with ID starting with 4520cae67722951660081decece3745c0abd896e4df9ffd0b009c00188cac1ea not found: ID does not exist" Jan 23 14:45:37 crc kubenswrapper[4775]: I0123 14:45:37.729001 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5820a548-636b-4a69-b8d6-b947ee11e3fd" path="/var/lib/kubelet/pods/5820a548-636b-4a69-b8d6-b947ee11e3fd/volumes" Jan 23 14:45:37 crc kubenswrapper[4775]: I0123 14:45:37.730221 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a0e2681-58a7-4050-9dd0-3b0d77bdde6c" path="/var/lib/kubelet/pods/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c/volumes" Jan 23 14:45:45 crc kubenswrapper[4775]: I0123 14:45:45.714733 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:45:45 crc kubenswrapper[4775]: E0123 14:45:45.715730 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:45:49 crc kubenswrapper[4775]: I0123 14:45:49.533533 4775 scope.go:117] "RemoveContainer" containerID="bd180f88acb55bc6174b54cab0740792964b942d82c9bf0cffd2ac1751bececd" Jan 23 14:45:50 crc kubenswrapper[4775]: I0123 14:45:50.074661 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-db-create-bfq79"] Jan 23 14:45:50 crc kubenswrapper[4775]: I0123 14:45:50.080583 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8"] Jan 23 14:45:50 crc kubenswrapper[4775]: I0123 14:45:50.096238 4775 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h"] Jan 23 14:45:50 crc kubenswrapper[4775]: I0123 14:45:50.106832 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-82jzj"] Jan 23 14:45:50 crc kubenswrapper[4775]: I0123 14:45:50.115169 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-d8kgs"] Jan 23 14:45:50 crc kubenswrapper[4775]: I0123 14:45:50.120438 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-db-create-bfq79"] Jan 23 14:45:50 crc kubenswrapper[4775]: I0123 14:45:50.125388 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8"] Jan 23 14:45:50 crc kubenswrapper[4775]: I0123 14:45:50.130421 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h"] Jan 23 14:45:50 crc kubenswrapper[4775]: I0123 14:45:50.137331 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-d8kgs"] Jan 23 14:45:50 crc kubenswrapper[4775]: I0123 14:45:50.147196 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s"] Jan 23 14:45:50 crc kubenswrapper[4775]: I0123 14:45:50.156871 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-82jzj"] Jan 23 14:45:50 crc kubenswrapper[4775]: I0123 14:45:50.163415 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s"] Jan 23 14:45:51 crc kubenswrapper[4775]: I0123 14:45:51.725600 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15c2fb30-3be5-4e47-b2d3-8fbd54665494" path="/var/lib/kubelet/pods/15c2fb30-3be5-4e47-b2d3-8fbd54665494/volumes" Jan 23 14:45:51 crc kubenswrapper[4775]: I0123 14:45:51.727365 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48eb2aff-1769-415f-b284-8d0cbf32a4e9" path="/var/lib/kubelet/pods/48eb2aff-1769-415f-b284-8d0cbf32a4e9/volumes" Jan 23 14:45:51 crc kubenswrapper[4775]: I0123 14:45:51.728458 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="603674a6-1055-4e27-b370-2b57865ebc55" path="/var/lib/kubelet/pods/603674a6-1055-4e27-b370-2b57865ebc55/volumes" Jan 23 14:45:51 crc kubenswrapper[4775]: I0123 14:45:51.729609 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="891c1a15-7b44-4c8f-be11-d06333a1d0d1" path="/var/lib/kubelet/pods/891c1a15-7b44-4c8f-be11-d06333a1d0d1/volumes" Jan 23 14:45:51 crc kubenswrapper[4775]: I0123 14:45:51.731598 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95df8848-8035-4302-9689-db060f7d4148" path="/var/lib/kubelet/pods/95df8848-8035-4302-9689-db060f7d4148/volumes" Jan 23 14:45:51 crc kubenswrapper[4775]: I0123 14:45:51.732727 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98b564d3-5399-47b6-9397-4c3b006f9e13" path="/var/lib/kubelet/pods/98b564d3-5399-47b6-9397-4c3b006f9e13/volumes" Jan 23 14:45:59 crc kubenswrapper[4775]: I0123 14:45:59.034526 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8"] Jan 23 14:45:59 crc kubenswrapper[4775]: I0123 14:45:59.050176 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8"] Jan 23 14:45:59 crc kubenswrapper[4775]: I0123 14:45:59.714309 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:45:59 crc kubenswrapper[4775]: I0123 14:45:59.746496 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12f70e17-ec31-43fc-ac56-d1742f962de5" path="/var/lib/kubelet/pods/12f70e17-ec31-43fc-ac56-d1742f962de5/volumes" Jan 23 14:45:59 crc kubenswrapper[4775]: I0123 14:45:59.956593 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerStarted","Data":"fb9925329613a52dcbc6411915216316f974c31f7e89dd07fdacbd9dd078559f"} Jan 23 14:46:17 crc kubenswrapper[4775]: I0123 14:46:17.037641 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc"] Jan 23 14:46:17 crc kubenswrapper[4775]: I0123 14:46:17.042686 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r"] Jan 23 14:46:17 crc kubenswrapper[4775]: I0123 14:46:17.048287 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc"] Jan 23 14:46:17 crc kubenswrapper[4775]: I0123 14:46:17.053021 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r"] Jan 23 14:46:17 crc kubenswrapper[4775]: I0123 14:46:17.721983 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="263d2fcc-c533-4291-8e78-d8e9a2ee2894" path="/var/lib/kubelet/pods/263d2fcc-c533-4291-8e78-d8e9a2ee2894/volumes" Jan 23 14:46:17 crc kubenswrapper[4775]: I0123 14:46:17.722735 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a194a858-8c18-41e1-9a10-428397753ece" path="/var/lib/kubelet/pods/a194a858-8c18-41e1-9a10-428397753ece/volumes" Jan 23 14:46:17 crc kubenswrapper[4775]: I0123 14:46:17.832177 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt_100f3a0b-4d11-495f-a6fe-57b196820ee3/util/0.log" Jan 23 14:46:17 crc kubenswrapper[4775]: I0123 14:46:17.975785 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt_100f3a0b-4d11-495f-a6fe-57b196820ee3/util/0.log" Jan 23 14:46:17 crc kubenswrapper[4775]: I0123 14:46:17.978341 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt_100f3a0b-4d11-495f-a6fe-57b196820ee3/pull/0.log" Jan 23 14:46:18 crc kubenswrapper[4775]: I0123 14:46:18.051543 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt_100f3a0b-4d11-495f-a6fe-57b196820ee3/pull/0.log" Jan 23 14:46:18 crc kubenswrapper[4775]: I0123 14:46:18.191208 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt_100f3a0b-4d11-495f-a6fe-57b196820ee3/util/0.log" Jan 23 14:46:18 crc kubenswrapper[4775]: I0123 14:46:18.201034 4775 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt_100f3a0b-4d11-495f-a6fe-57b196820ee3/pull/0.log" Jan 23 14:46:18 crc kubenswrapper[4775]: I0123 14:46:18.218508 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt_100f3a0b-4d11-495f-a6fe-57b196820ee3/extract/0.log" Jan 23 14:46:18 crc kubenswrapper[4775]: I0123 14:46:18.366051 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc_a7025f67-434a-4dba-9b3a-e3b809f5c614/util/0.log" Jan 23 14:46:18 crc kubenswrapper[4775]: I0123 14:46:18.524862 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc_a7025f67-434a-4dba-9b3a-e3b809f5c614/pull/0.log" Jan 23 14:46:18 crc kubenswrapper[4775]: I0123 14:46:18.540626 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc_a7025f67-434a-4dba-9b3a-e3b809f5c614/pull/0.log" Jan 23 14:46:18 crc kubenswrapper[4775]: I0123 14:46:18.545989 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc_a7025f67-434a-4dba-9b3a-e3b809f5c614/util/0.log" Jan 23 14:46:18 crc kubenswrapper[4775]: I0123 14:46:18.712856 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc_a7025f67-434a-4dba-9b3a-e3b809f5c614/util/0.log" Jan 23 14:46:18 crc kubenswrapper[4775]: I0123 14:46:18.725623 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc_a7025f67-434a-4dba-9b3a-e3b809f5c614/extract/0.log" Jan 23 14:46:18 crc kubenswrapper[4775]: I0123 14:46:18.762478 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc_a7025f67-434a-4dba-9b3a-e3b809f5c614/pull/0.log" Jan 23 14:46:18 crc kubenswrapper[4775]: I0123 14:46:18.902384 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-pk9jd_56ee00d0-c0f0-442a-bf4a-7335b62c1c4e/manager/0.log" Jan 23 14:46:18 crc kubenswrapper[4775]: I0123 14:46:18.928583 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-dz7ft_9ce79c2a-2c52-48de-80a6-887d592578d3/manager/0.log" Jan 23 14:46:19 crc kubenswrapper[4775]: I0123 14:46:19.093908 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-ppxmc_352223d5-fa0a-43df-8bad-0eaa9b6b439d/manager/0.log" Jan 23 14:46:19 crc kubenswrapper[4775]: I0123 14:46:19.103127 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-jq89z_64bae0eb-d703-4058-a545-b42d62045b90/manager/0.log" Jan 23 14:46:19 crc kubenswrapper[4775]: I0123 14:46:19.271651 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-xrmvt_841fb528-61a8-445e-a135-be26295bc975/manager/0.log" Jan 23 14:46:19 crc kubenswrapper[4775]: I0123 
14:46:19.299469 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-sg9x5_d9e69fcf-58c9-45fe-a291-4628c8219e10/manager/0.log" Jan 23 14:46:19 crc kubenswrapper[4775]: I0123 14:46:19.464370 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-f7lm6_d98bebb2-a42a-45a6-b452-a82ce1f62896/manager/0.log" Jan 23 14:46:19 crc kubenswrapper[4775]: I0123 14:46:19.503204 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-58749ffdfb-mcrj4_5a65a9ef-28c7-46ae-826d-5546af1103a5/manager/0.log" Jan 23 14:46:19 crc kubenswrapper[4775]: I0123 14:46:19.693024 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-bgbpj_0784c928-e0c5-4afb-99cb-4f1f96820a14/manager/0.log" Jan 23 14:46:19 crc kubenswrapper[4775]: I0123 14:46:19.723415 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-pfdc5_853c6152-25bf-4374-a941-f9cd4202c87f/manager/0.log" Jan 23 14:46:19 crc kubenswrapper[4775]: I0123 14:46:19.899475 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg_bb6ce8ae-8d3f-4988-9386-6a20487f8ae9/manager/0.log" Jan 23 14:46:19 crc kubenswrapper[4775]: I0123 14:46:19.910297 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-sxkzh_9710b785-e422-4aca-88e8-e88d26d4e724/manager/0.log" Jan 23 14:46:20 crc kubenswrapper[4775]: I0123 14:46:20.075614 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-index-x4gqk_78f375c8-5d62-4cbb-b348-8205d476d603/registry-server/0.log" Jan 23 14:46:20 crc kubenswrapper[4775]: I0123 14:46:20.311696 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-vl7m5_a07598ff-60cc-482e-a551-af751575709c/manager/0.log" Jan 23 14:46:20 crc kubenswrapper[4775]: I0123 14:46:20.353640 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7c5fcc4cc6-wwr78_92377252-2e4d-48bb-95ea-724a4ff5c788/manager/0.log" Jan 23 14:46:20 crc kubenswrapper[4775]: I0123 14:46:20.441744 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854zk48c_44a963d8-d403-42d5-acd2-a0379f07db51/manager/0.log" Jan 23 14:46:20 crc kubenswrapper[4775]: I0123 14:46:20.625605 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-5czdz_a0ddc210-ca29-42e4-a4c2-a07881434fed/registry-server/0.log" Jan 23 14:46:20 crc kubenswrapper[4775]: I0123 14:46:20.735203 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-bb8f85db-bkqk9_313b5382-60cf-4627-8ba7-a091fc457989/manager/0.log" Jan 23 14:46:20 crc kubenswrapper[4775]: I0123 14:46:20.764211 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-xst4r_3d7c7bc6-5124-4cd4-a406-448ca94ba640/manager/0.log" Jan 23 14:46:20 crc kubenswrapper[4775]: I0123 14:46:20.918209 4775 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-n4k5s_072b9a9d-8a08-454c-b1b6-628fcdcc91df/manager/0.log" Jan 23 14:46:20 crc kubenswrapper[4775]: I0123 14:46:20.975480 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-2lhsf_f9da51f1-a035-44b8-9391-0d6018a84c61/operator/0.log" Jan 23 14:46:21 crc kubenswrapper[4775]: I0123 14:46:21.087164 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-nqw74_ecef6080-ea2c-43f4-8ffa-da2ceb59369d/manager/0.log" Jan 23 14:46:21 crc kubenswrapper[4775]: I0123 14:46:21.125393 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-jrhlh_91da96b4-921a-4b88-9804-55745989e08b/manager/0.log" Jan 23 14:46:21 crc kubenswrapper[4775]: I0123 14:46:21.172396 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-xtmz8_9f9597bf-12a1-4204-ac57-37c4c0189687/manager/0.log" Jan 23 14:46:21 crc kubenswrapper[4775]: I0123 14:46:21.321352 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6d9458688d-v8dw9_272dcd84-1bb6-42cb-8c8e-6851f9f031de/manager/0.log" Jan 23 14:46:31 crc kubenswrapper[4775]: I0123 14:46:31.038168 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8"] Jan 23 14:46:31 crc kubenswrapper[4775]: I0123 14:46:31.046051 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8"] Jan 23 14:46:31 crc kubenswrapper[4775]: I0123 14:46:31.729358 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ef19dc5-1d78-479c-8220-340c46c44bdf" path="/var/lib/kubelet/pods/3ef19dc5-1d78-479c-8220-340c46c44bdf/volumes" Jan 23 14:46:41 crc kubenswrapper[4775]: I0123 14:46:41.712573 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-psxgx_13e16abe-9325-4638-8b20-7195b7af8e68/control-plane-machine-set-operator/0.log" Jan 23 14:46:41 crc kubenswrapper[4775]: I0123 14:46:41.884062 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-svb79_85a9044b-9089-4a6a-87e6-06372c531aa9/kube-rbac-proxy/0.log" Jan 23 14:46:41 crc kubenswrapper[4775]: I0123 14:46:41.918074 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-svb79_85a9044b-9089-4a6a-87e6-06372c531aa9/machine-api-operator/0.log" Jan 23 14:46:49 crc kubenswrapper[4775]: I0123 14:46:49.680944 4775 scope.go:117] "RemoveContainer" containerID="3b2dfb102f46ee1631a2160c9d3d2f454d0244cb082c8318b072e1947bb67ce1" Jan 23 14:46:49 crc kubenswrapper[4775]: I0123 14:46:49.707939 4775 scope.go:117] "RemoveContainer" containerID="0eff9d8eee28ce912e21c7c4f7871ae916bc9d5ed3ea4fca779e82c2788bb4b7" Jan 23 14:46:49 crc kubenswrapper[4775]: I0123 14:46:49.757483 4775 scope.go:117] "RemoveContainer" containerID="fad204a9922c6b587aa30b8277005173345d455f94c99d5d275be428107c4c7c" Jan 23 14:46:49 crc kubenswrapper[4775]: I0123 14:46:49.797086 4775 scope.go:117] "RemoveContainer" containerID="8d06597f807e3e42864d38d837f7984e31d4d87d055c7ea7bb57e3bf624b9c80" Jan 23 14:46:49 crc kubenswrapper[4775]: I0123 
14:46:49.844969 4775 scope.go:117] "RemoveContainer" containerID="f75e094c5540e8cb925dd39cbb448ad5adf94fb3b2f88a9a2855acad38942424" Jan 23 14:46:49 crc kubenswrapper[4775]: I0123 14:46:49.863612 4775 scope.go:117] "RemoveContainer" containerID="5022709a82d85e5efe22de467daeee972c2edbb45f0956772656b5f2da7c871d" Jan 23 14:46:49 crc kubenswrapper[4775]: I0123 14:46:49.891185 4775 scope.go:117] "RemoveContainer" containerID="7866fa95041ef01597a04bb378890e5ad494e3f63a1535140905408dc45663a9" Jan 23 14:46:49 crc kubenswrapper[4775]: I0123 14:46:49.937965 4775 scope.go:117] "RemoveContainer" containerID="10368cb00c51c9c09d42987a704f6c282da205a1023667df771174ceb21b2b54" Jan 23 14:46:49 crc kubenswrapper[4775]: I0123 14:46:49.984214 4775 scope.go:117] "RemoveContainer" containerID="9181f36c62e9c5f12ea45cd0ada22e77d0a8f8e6dddcf6191c606aedb0bccd71" Jan 23 14:46:50 crc kubenswrapper[4775]: I0123 14:46:50.010690 4775 scope.go:117] "RemoveContainer" containerID="60accca565e62d33f56b52cced99fb327dbdd19ac23aa7c351971c0a1d7d06f7" Jan 23 14:46:56 crc kubenswrapper[4775]: I0123 14:46:56.934425 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-dzfhf_2a26d984-5abe-44ce-ad1e-25842b8f7e51/cert-manager-controller/0.log" Jan 23 14:46:57 crc kubenswrapper[4775]: I0123 14:46:57.059284 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-qsmln_620134d3-d230-4c5b-8aaf-4213bcba307c/cert-manager-cainjector/0.log" Jan 23 14:46:57 crc kubenswrapper[4775]: I0123 14:46:57.100358 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-w6lsn_3613a1b4-54b6-4a47-988a-a6624d530636/cert-manager-webhook/0.log" Jan 23 14:47:11 crc kubenswrapper[4775]: I0123 14:47:11.707223 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-w5xfs_e932364d-5f85-43fd-ba05-f4e0934482c2/nmstate-console-plugin/0.log" Jan 23 14:47:11 crc kubenswrapper[4775]: I0123 14:47:11.928767 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-wmglj_18100557-00ef-4de8-9a7f-df953190a9c6/nmstate-handler/0.log" Jan 23 14:47:12 crc kubenswrapper[4775]: I0123 14:47:12.035640 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-p7nxk_97726a36-cf4b-4688-b028-448734bd8c23/kube-rbac-proxy/0.log" Jan 23 14:47:12 crc kubenswrapper[4775]: I0123 14:47:12.046305 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-p7nxk_97726a36-cf4b-4688-b028-448734bd8c23/nmstate-metrics/0.log" Jan 23 14:47:12 crc kubenswrapper[4775]: I0123 14:47:12.119338 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-gq778_ebe0482d-2988-4f4d-929f-4c2980e19cf3/nmstate-operator/0.log" Jan 23 14:47:12 crc kubenswrapper[4775]: I0123 14:47:12.221663 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-rnbff_6932e29c-8eac-4e0f-9516-c2e922655cbc/nmstate-webhook/0.log" Jan 23 14:47:44 crc kubenswrapper[4775]: I0123 14:47:44.072285 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-7qz58_7755c0c4-4e11-47c6-955d-453408fd4316/kube-rbac-proxy/0.log" Jan 23 14:47:44 crc kubenswrapper[4775]: I0123 14:47:44.167339 4775 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6968d8fdc4-7qz58_7755c0c4-4e11-47c6-955d-453408fd4316/controller/0.log" Jan 23 14:47:44 crc kubenswrapper[4775]: I0123 14:47:44.272402 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/cp-frr-files/0.log" Jan 23 14:47:44 crc kubenswrapper[4775]: I0123 14:47:44.404290 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/cp-frr-files/0.log" Jan 23 14:47:44 crc kubenswrapper[4775]: I0123 14:47:44.416000 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/cp-reloader/0.log" Jan 23 14:47:44 crc kubenswrapper[4775]: I0123 14:47:44.448883 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/cp-reloader/0.log" Jan 23 14:47:44 crc kubenswrapper[4775]: I0123 14:47:44.455040 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/cp-metrics/0.log" Jan 23 14:47:44 crc kubenswrapper[4775]: I0123 14:47:44.597392 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/cp-metrics/0.log" Jan 23 14:47:44 crc kubenswrapper[4775]: I0123 14:47:44.622032 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/cp-frr-files/0.log" Jan 23 14:47:44 crc kubenswrapper[4775]: I0123 14:47:44.631248 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/cp-metrics/0.log" Jan 23 14:47:44 crc kubenswrapper[4775]: I0123 14:47:44.644080 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/cp-reloader/0.log" Jan 23 14:47:44 crc kubenswrapper[4775]: I0123 14:47:44.830691 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/cp-frr-files/0.log" Jan 23 14:47:44 crc kubenswrapper[4775]: I0123 14:47:44.850987 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/cp-reloader/0.log" Jan 23 14:47:44 crc kubenswrapper[4775]: I0123 14:47:44.851879 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/cp-metrics/0.log" Jan 23 14:47:44 crc kubenswrapper[4775]: I0123 14:47:44.861119 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/controller/0.log" Jan 23 14:47:45 crc kubenswrapper[4775]: I0123 14:47:45.063148 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/kube-rbac-proxy-frr/0.log" Jan 23 14:47:45 crc kubenswrapper[4775]: I0123 14:47:45.063928 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/kube-rbac-proxy/0.log" Jan 23 14:47:45 crc kubenswrapper[4775]: I0123 14:47:45.093599 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/frr-metrics/0.log" Jan 23 14:47:45 crc 
kubenswrapper[4775]: I0123 14:47:45.238558 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/reloader/0.log"
Jan 23 14:47:45 crc kubenswrapper[4775]: I0123 14:47:45.341969 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-p49hv_9eb8e4c8-06ce-427a-9b91-7b77d4e8a783/frr-k8s-webhook-server/0.log"
Jan 23 14:47:45 crc kubenswrapper[4775]: I0123 14:47:45.465512 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-558d9b5f8-fgs57_838b952f-6d05-4955-82fd-9cf8a017c5b5/manager/0.log"
Jan 23 14:47:45 crc kubenswrapper[4775]: I0123 14:47:45.657860 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-699f5544f9-66nkz_fa6cceac-c1d4-4e7c-9e60-4dd698abc182/webhook-server/0.log"
Jan 23 14:47:45 crc kubenswrapper[4775]: I0123 14:47:45.808915 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-x4gxj_9334cd3c-2410-4fbd-8cc1-14edca3afb92/kube-rbac-proxy/0.log"
Jan 23 14:47:46 crc kubenswrapper[4775]: I0123 14:47:46.066325 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-x4gxj_9334cd3c-2410-4fbd-8cc1-14edca3afb92/speaker/0.log"
Jan 23 14:47:46 crc kubenswrapper[4775]: I0123 14:47:46.166541 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/frr/0.log"
Jan 23 14:48:04 crc kubenswrapper[4775]: I0123 14:48:04.153599 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-7d978f-gdlmv_898c8554-82c6-4777-8869-15981e356a84/keystone-api/0.log"
Jan 23 14:48:04 crc kubenswrapper[4775]: I0123 14:48:04.354557 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-api-0_56066bf2-4408-46e5-8df0-6ce62447bf2a/nova-kuttl-api-api/0.log"
Jan 23 14:48:04 crc kubenswrapper[4775]: I0123 14:48:04.604876 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-api-0_56066bf2-4408-46e5-8df0-6ce62447bf2a/nova-kuttl-api-log/0.log"
Jan 23 14:48:04 crc kubenswrapper[4775]: I0123 14:48:04.662404 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-conductor-0_fab3b1c6-093c-4891-957c-fad86eb8fd31/nova-kuttl-cell0-conductor-conductor/0.log"
Jan 23 14:48:04 crc kubenswrapper[4775]: I0123 14:48:04.874688 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-conductor-0_1fd448a3-6897-490f-9c92-98590cee53ca/nova-kuttl-cell1-conductor-conductor/0.log"
Jan 23 14:48:05 crc kubenswrapper[4775]: I0123 14:48:05.096063 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-novncproxy-0_cb15b357-f464-4e43-a038-3b9e72455d49/nova-kuttl-cell1-novncproxy-novncproxy/0.log"
Jan 23 14:48:05 crc kubenswrapper[4775]: I0123 14:48:05.145506 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-metadata-0_72d0a843-11de-43a6-9c92-6a65a6d406ec/nova-kuttl-metadata-log/0.log"
Jan 23 14:48:05 crc kubenswrapper[4775]: I0123 14:48:05.248862 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-metadata-0_72d0a843-11de-43a6-9c92-6a65a6d406ec/nova-kuttl-metadata-metadata/0.log"
Jan 23 14:48:05 crc kubenswrapper[4775]: I0123 14:48:05.404938 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-scheduler-0_bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0/nova-kuttl-scheduler-scheduler/0.log"
Jan 23 14:48:05 crc kubenswrapper[4775]: I0123 14:48:05.517604 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_481cbe1b-2796-4ad2-a342-3661afa62383/mysql-bootstrap/0.log"
Jan 23 14:48:05 crc kubenswrapper[4775]: I0123 14:48:05.734440 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_481cbe1b-2796-4ad2-a342-3661afa62383/mysql-bootstrap/0.log"
Jan 23 14:48:05 crc kubenswrapper[4775]: I0123 14:48:05.765127 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_481cbe1b-2796-4ad2-a342-3661afa62383/galera/0.log"
Jan 23 14:48:05 crc kubenswrapper[4775]: I0123 14:48:05.942782 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_372c512d-5894-49da-ae1e-cb3e54aadacc/mysql-bootstrap/0.log"
Jan 23 14:48:06 crc kubenswrapper[4775]: I0123 14:48:06.209450 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_372c512d-5894-49da-ae1e-cb3e54aadacc/galera/0.log"
Jan 23 14:48:06 crc kubenswrapper[4775]: I0123 14:48:06.252025 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_372c512d-5894-49da-ae1e-cb3e54aadacc/mysql-bootstrap/0.log"
Jan 23 14:48:06 crc kubenswrapper[4775]: I0123 14:48:06.424031 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstackclient_76733f2d-491c-45dd-bcf5-1a4423019717/openstackclient/0.log"
Jan 23 14:48:06 crc kubenswrapper[4775]: I0123 14:48:06.465601 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_memcached-0_2e1f7aa1-1780-4ccb-b1a5-66b9b279d555/memcached/0.log"
Jan 23 14:48:06 crc kubenswrapper[4775]: I0123 14:48:06.476989 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-7787b67bb8-psq7t_6b653824-2e32-431a-8b16-f8687610c0fe/placement-api/0.log"
Jan 23 14:48:06 crc kubenswrapper[4775]: I0123 14:48:06.613706 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-7787b67bb8-psq7t_6b653824-2e32-431a-8b16-f8687610c0fe/placement-log/0.log"
Jan 23 14:48:06 crc kubenswrapper[4775]: I0123 14:48:06.653918 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_401a94b6-0628-4cea-b62a-c3229a913d16/setup-container/0.log"
Jan 23 14:48:06 crc kubenswrapper[4775]: I0123 14:48:06.809680 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_401a94b6-0628-4cea-b62a-c3229a913d16/setup-container/0.log"
Jan 23 14:48:06 crc kubenswrapper[4775]: I0123 14:48:06.866939 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_401a94b6-0628-4cea-b62a-c3229a913d16/rabbitmq/0.log"
Jan 23 14:48:06 crc kubenswrapper[4775]: I0123 14:48:06.873071 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_4b05c189-a694-4cbc-b679-a974e6bf99bc/setup-container/0.log"
Jan 23 14:48:07 crc kubenswrapper[4775]: I0123 14:48:07.009398 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_4b05c189-a694-4cbc-b679-a974e6bf99bc/setup-container/0.log"
Jan 23 14:48:07 crc kubenswrapper[4775]: I0123 14:48:07.059773 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_70288c27-7f95-4843-a8fb-f2ac58ea8e1f/setup-container/0.log"
Jan 23 14:48:07 crc kubenswrapper[4775]: I0123 14:48:07.062634 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_4b05c189-a694-4cbc-b679-a974e6bf99bc/rabbitmq/0.log"
Jan 23 14:48:07 crc kubenswrapper[4775]: I0123 14:48:07.235019 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_70288c27-7f95-4843-a8fb-f2ac58ea8e1f/setup-container/0.log"
Jan 23 14:48:07 crc kubenswrapper[4775]: I0123 14:48:07.259457 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_70288c27-7f95-4843-a8fb-f2ac58ea8e1f/rabbitmq/0.log"
Jan 23 14:48:23 crc kubenswrapper[4775]: I0123 14:48:23.016588 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j_44d1d9d6-a01e-49cc-8066-15c9954fda32/util/0.log"
Jan 23 14:48:23 crc kubenswrapper[4775]: I0123 14:48:23.218475 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:48:23 crc kubenswrapper[4775]: I0123 14:48:23.218858 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:48:23 crc kubenswrapper[4775]: I0123 14:48:23.346606 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j_44d1d9d6-a01e-49cc-8066-15c9954fda32/util/0.log"
Jan 23 14:48:23 crc kubenswrapper[4775]: I0123 14:48:23.383830 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j_44d1d9d6-a01e-49cc-8066-15c9954fda32/pull/0.log"
Jan 23 14:48:23 crc kubenswrapper[4775]: I0123 14:48:23.402346 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j_44d1d9d6-a01e-49cc-8066-15c9954fda32/pull/0.log"
Jan 23 14:48:23 crc kubenswrapper[4775]: I0123 14:48:23.581237 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j_44d1d9d6-a01e-49cc-8066-15c9954fda32/extract/0.log"
Jan 23 14:48:23 crc kubenswrapper[4775]: I0123 14:48:23.584944 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j_44d1d9d6-a01e-49cc-8066-15c9954fda32/util/0.log"
path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j_44d1d9d6-a01e-49cc-8066-15c9954fda32/pull/0.log" Jan 23 14:48:23 crc kubenswrapper[4775]: I0123 14:48:23.767032 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f_6f15de03-78a8-4158-8a06-0174d617e32b/util/0.log" Jan 23 14:48:23 crc kubenswrapper[4775]: I0123 14:48:23.938652 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f_6f15de03-78a8-4158-8a06-0174d617e32b/util/0.log" Jan 23 14:48:23 crc kubenswrapper[4775]: I0123 14:48:23.989458 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f_6f15de03-78a8-4158-8a06-0174d617e32b/pull/0.log" Jan 23 14:48:23 crc kubenswrapper[4775]: I0123 14:48:23.993446 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f_6f15de03-78a8-4158-8a06-0174d617e32b/pull/0.log" Jan 23 14:48:24 crc kubenswrapper[4775]: I0123 14:48:24.138067 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f_6f15de03-78a8-4158-8a06-0174d617e32b/pull/0.log" Jan 23 14:48:24 crc kubenswrapper[4775]: I0123 14:48:24.145844 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f_6f15de03-78a8-4158-8a06-0174d617e32b/util/0.log" Jan 23 14:48:24 crc kubenswrapper[4775]: I0123 14:48:24.181204 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f_6f15de03-78a8-4158-8a06-0174d617e32b/extract/0.log" Jan 23 14:48:24 crc kubenswrapper[4775]: I0123 14:48:24.303511 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll_d4d873a3-d698-439c-a1de-c9a7fc9e1e6d/util/0.log" Jan 23 14:48:24 crc kubenswrapper[4775]: I0123 14:48:24.467381 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll_d4d873a3-d698-439c-a1de-c9a7fc9e1e6d/pull/0.log" Jan 23 14:48:24 crc kubenswrapper[4775]: I0123 14:48:24.474966 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll_d4d873a3-d698-439c-a1de-c9a7fc9e1e6d/util/0.log" Jan 23 14:48:24 crc kubenswrapper[4775]: I0123 14:48:24.485217 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll_d4d873a3-d698-439c-a1de-c9a7fc9e1e6d/pull/0.log" Jan 23 14:48:24 crc kubenswrapper[4775]: I0123 14:48:24.801819 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll_d4d873a3-d698-439c-a1de-c9a7fc9e1e6d/pull/0.log" Jan 23 14:48:24 crc kubenswrapper[4775]: I0123 14:48:24.923385 4775 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll_d4d873a3-d698-439c-a1de-c9a7fc9e1e6d/extract/0.log" Jan 23 14:48:24 crc kubenswrapper[4775]: I0123 14:48:24.947407 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll_d4d873a3-d698-439c-a1de-c9a7fc9e1e6d/util/0.log" Jan 23 14:48:25 crc kubenswrapper[4775]: I0123 14:48:25.022353 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bb2pb_d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5/extract-utilities/0.log" Jan 23 14:48:25 crc kubenswrapper[4775]: I0123 14:48:25.216795 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bb2pb_d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5/extract-utilities/0.log" Jan 23 14:48:25 crc kubenswrapper[4775]: I0123 14:48:25.234027 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bb2pb_d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5/extract-content/0.log" Jan 23 14:48:25 crc kubenswrapper[4775]: I0123 14:48:25.267603 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bb2pb_d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5/extract-content/0.log" Jan 23 14:48:25 crc kubenswrapper[4775]: I0123 14:48:25.408624 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bb2pb_d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5/extract-utilities/0.log" Jan 23 14:48:25 crc kubenswrapper[4775]: I0123 14:48:25.482955 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bb2pb_d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5/extract-content/0.log" Jan 23 14:48:25 crc kubenswrapper[4775]: I0123 14:48:25.607728 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8jjcj_ed5c162e-62a9-4760-b5e0-a249a70225a0/extract-utilities/0.log" Jan 23 14:48:25 crc kubenswrapper[4775]: I0123 14:48:25.784729 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bb2pb_d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5/registry-server/0.log" Jan 23 14:48:25 crc kubenswrapper[4775]: I0123 14:48:25.837202 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8jjcj_ed5c162e-62a9-4760-b5e0-a249a70225a0/extract-content/0.log" Jan 23 14:48:25 crc kubenswrapper[4775]: I0123 14:48:25.879491 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8jjcj_ed5c162e-62a9-4760-b5e0-a249a70225a0/extract-utilities/0.log" Jan 23 14:48:25 crc kubenswrapper[4775]: I0123 14:48:25.940965 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8jjcj_ed5c162e-62a9-4760-b5e0-a249a70225a0/extract-content/0.log" Jan 23 14:48:26 crc kubenswrapper[4775]: I0123 14:48:26.043940 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8jjcj_ed5c162e-62a9-4760-b5e0-a249a70225a0/extract-utilities/0.log" Jan 23 14:48:26 crc kubenswrapper[4775]: I0123 14:48:26.044425 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8jjcj_ed5c162e-62a9-4760-b5e0-a249a70225a0/extract-content/0.log" Jan 23 14:48:26 crc kubenswrapper[4775]: I0123 14:48:26.211403 4775 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-24s7d_ffa6638c-aaa0-418b-ad22-e5532ae16f68/marketplace-operator/0.log" Jan 23 14:48:26 crc kubenswrapper[4775]: I0123 14:48:26.392766 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8jjcj_ed5c162e-62a9-4760-b5e0-a249a70225a0/registry-server/0.log" Jan 23 14:48:26 crc kubenswrapper[4775]: I0123 14:48:26.414699 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fxcrw_39bc9387-f295-4aec-ad66-8831265c0400/extract-utilities/0.log" Jan 23 14:48:26 crc kubenswrapper[4775]: I0123 14:48:26.611000 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fxcrw_39bc9387-f295-4aec-ad66-8831265c0400/extract-content/0.log" Jan 23 14:48:26 crc kubenswrapper[4775]: I0123 14:48:26.626142 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fxcrw_39bc9387-f295-4aec-ad66-8831265c0400/extract-utilities/0.log" Jan 23 14:48:26 crc kubenswrapper[4775]: I0123 14:48:26.636055 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fxcrw_39bc9387-f295-4aec-ad66-8831265c0400/extract-content/0.log" Jan 23 14:48:26 crc kubenswrapper[4775]: I0123 14:48:26.788149 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fxcrw_39bc9387-f295-4aec-ad66-8831265c0400/extract-utilities/0.log" Jan 23 14:48:26 crc kubenswrapper[4775]: I0123 14:48:26.791661 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fxcrw_39bc9387-f295-4aec-ad66-8831265c0400/extract-content/0.log" Jan 23 14:48:26 crc kubenswrapper[4775]: I0123 14:48:26.889760 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sx4qm_0c94dee4-8e79-4f60-a8b9-2c1f33490ba7/extract-utilities/0.log" Jan 23 14:48:26 crc kubenswrapper[4775]: I0123 14:48:26.938032 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fxcrw_39bc9387-f295-4aec-ad66-8831265c0400/registry-server/0.log" Jan 23 14:48:27 crc kubenswrapper[4775]: I0123 14:48:27.026408 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sx4qm_0c94dee4-8e79-4f60-a8b9-2c1f33490ba7/extract-utilities/0.log" Jan 23 14:48:27 crc kubenswrapper[4775]: I0123 14:48:27.069118 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sx4qm_0c94dee4-8e79-4f60-a8b9-2c1f33490ba7/extract-content/0.log" Jan 23 14:48:27 crc kubenswrapper[4775]: I0123 14:48:27.096104 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sx4qm_0c94dee4-8e79-4f60-a8b9-2c1f33490ba7/extract-content/0.log" Jan 23 14:48:27 crc kubenswrapper[4775]: I0123 14:48:27.223886 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sx4qm_0c94dee4-8e79-4f60-a8b9-2c1f33490ba7/extract-content/0.log" Jan 23 14:48:27 crc kubenswrapper[4775]: I0123 14:48:27.226788 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sx4qm_0c94dee4-8e79-4f60-a8b9-2c1f33490ba7/extract-utilities/0.log" Jan 23 14:48:27 crc kubenswrapper[4775]: I0123 14:48:27.577329 4775 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sx4qm_0c94dee4-8e79-4f60-a8b9-2c1f33490ba7/registry-server/0.log" Jan 23 14:48:53 crc kubenswrapper[4775]: I0123 14:48:53.219010 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:48:53 crc kubenswrapper[4775]: I0123 14:48:53.221120 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:49:23 crc kubenswrapper[4775]: I0123 14:49:23.219521 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:49:23 crc kubenswrapper[4775]: I0123 14:49:23.220289 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:49:23 crc kubenswrapper[4775]: I0123 14:49:23.220362 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:49:23 crc kubenswrapper[4775]: I0123 14:49:23.221326 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fb9925329613a52dcbc6411915216316f974c31f7e89dd07fdacbd9dd078559f"} pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 14:49:23 crc kubenswrapper[4775]: I0123 14:49:23.221645 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" containerID="cri-o://fb9925329613a52dcbc6411915216316f974c31f7e89dd07fdacbd9dd078559f" gracePeriod=600 Jan 23 14:49:23 crc kubenswrapper[4775]: I0123 14:49:23.722533 4775 generic.go:334] "Generic (PLEG): container finished" podID="4fea0767-0566-4214-855d-ed0373946271" containerID="fb9925329613a52dcbc6411915216316f974c31f7e89dd07fdacbd9dd078559f" exitCode=0 Jan 23 14:49:23 crc kubenswrapper[4775]: I0123 14:49:23.728170 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerDied","Data":"fb9925329613a52dcbc6411915216316f974c31f7e89dd07fdacbd9dd078559f"} Jan 23 14:49:23 crc kubenswrapper[4775]: I0123 14:49:23.728241 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" 
event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerStarted","Data":"b5e598cbf349da815af5db0b22df9dc34e13444bedef413becde0b98162db747"} Jan 23 14:49:23 crc kubenswrapper[4775]: I0123 14:49:23.728273 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:49:44 crc kubenswrapper[4775]: I0123 14:49:44.958206 4775 generic.go:334] "Generic (PLEG): container finished" podID="41dd897c-4a67-4a0a-a7a3-c17b6d05653d" containerID="3998f4e1023e2b01b3b3037ee3f54b7b541f7dd5b790471a05de169061550d51" exitCode=0 Jan 23 14:49:44 crc kubenswrapper[4775]: I0123 14:49:44.958420 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6vw8s/must-gather-9lvjt" event={"ID":"41dd897c-4a67-4a0a-a7a3-c17b6d05653d","Type":"ContainerDied","Data":"3998f4e1023e2b01b3b3037ee3f54b7b541f7dd5b790471a05de169061550d51"} Jan 23 14:49:44 crc kubenswrapper[4775]: I0123 14:49:44.959841 4775 scope.go:117] "RemoveContainer" containerID="3998f4e1023e2b01b3b3037ee3f54b7b541f7dd5b790471a05de169061550d51" Jan 23 14:49:45 crc kubenswrapper[4775]: I0123 14:49:45.712464 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6vw8s_must-gather-9lvjt_41dd897c-4a67-4a0a-a7a3-c17b6d05653d/gather/0.log" Jan 23 14:49:53 crc kubenswrapper[4775]: I0123 14:49:53.230590 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6vw8s/must-gather-9lvjt"] Jan 23 14:49:53 crc kubenswrapper[4775]: I0123 14:49:53.231648 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-6vw8s/must-gather-9lvjt" podUID="41dd897c-4a67-4a0a-a7a3-c17b6d05653d" containerName="copy" containerID="cri-o://9795a40e8b362f20a5bafb6221130232aed660a8237ad820b9b5c489d963be47" gracePeriod=2 Jan 23 14:49:53 crc kubenswrapper[4775]: I0123 14:49:53.243143 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6vw8s/must-gather-9lvjt"] Jan 23 14:49:53 crc kubenswrapper[4775]: I0123 14:49:53.636553 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6vw8s_must-gather-9lvjt_41dd897c-4a67-4a0a-a7a3-c17b6d05653d/copy/0.log" Jan 23 14:49:53 crc kubenswrapper[4775]: I0123 14:49:53.637938 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6vw8s/must-gather-9lvjt" Jan 23 14:49:53 crc kubenswrapper[4775]: I0123 14:49:53.722272 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86lv4\" (UniqueName: \"kubernetes.io/projected/41dd897c-4a67-4a0a-a7a3-c17b6d05653d-kube-api-access-86lv4\") pod \"41dd897c-4a67-4a0a-a7a3-c17b6d05653d\" (UID: \"41dd897c-4a67-4a0a-a7a3-c17b6d05653d\") " Jan 23 14:49:53 crc kubenswrapper[4775]: I0123 14:49:53.722629 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/41dd897c-4a67-4a0a-a7a3-c17b6d05653d-must-gather-output\") pod \"41dd897c-4a67-4a0a-a7a3-c17b6d05653d\" (UID: \"41dd897c-4a67-4a0a-a7a3-c17b6d05653d\") " Jan 23 14:49:53 crc kubenswrapper[4775]: I0123 14:49:53.731541 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41dd897c-4a67-4a0a-a7a3-c17b6d05653d-kube-api-access-86lv4" (OuterVolumeSpecName: "kube-api-access-86lv4") pod "41dd897c-4a67-4a0a-a7a3-c17b6d05653d" (UID: "41dd897c-4a67-4a0a-a7a3-c17b6d05653d"). 
InnerVolumeSpecName "kube-api-access-86lv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:49:53 crc kubenswrapper[4775]: I0123 14:49:53.831158 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86lv4\" (UniqueName: \"kubernetes.io/projected/41dd897c-4a67-4a0a-a7a3-c17b6d05653d-kube-api-access-86lv4\") on node \"crc\" DevicePath \"\"" Jan 23 14:49:53 crc kubenswrapper[4775]: I0123 14:49:53.907859 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41dd897c-4a67-4a0a-a7a3-c17b6d05653d-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "41dd897c-4a67-4a0a-a7a3-c17b6d05653d" (UID: "41dd897c-4a67-4a0a-a7a3-c17b6d05653d"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:49:53 crc kubenswrapper[4775]: I0123 14:49:53.932940 4775 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/41dd897c-4a67-4a0a-a7a3-c17b6d05653d-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 23 14:49:54 crc kubenswrapper[4775]: I0123 14:49:54.056693 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6vw8s_must-gather-9lvjt_41dd897c-4a67-4a0a-a7a3-c17b6d05653d/copy/0.log" Jan 23 14:49:54 crc kubenswrapper[4775]: I0123 14:49:54.057225 4775 generic.go:334] "Generic (PLEG): container finished" podID="41dd897c-4a67-4a0a-a7a3-c17b6d05653d" containerID="9795a40e8b362f20a5bafb6221130232aed660a8237ad820b9b5c489d963be47" exitCode=143 Jan 23 14:49:54 crc kubenswrapper[4775]: I0123 14:49:54.057318 4775 scope.go:117] "RemoveContainer" containerID="9795a40e8b362f20a5bafb6221130232aed660a8237ad820b9b5c489d963be47" Jan 23 14:49:54 crc kubenswrapper[4775]: I0123 14:49:54.057352 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6vw8s/must-gather-9lvjt" Jan 23 14:49:54 crc kubenswrapper[4775]: I0123 14:49:54.088381 4775 scope.go:117] "RemoveContainer" containerID="3998f4e1023e2b01b3b3037ee3f54b7b541f7dd5b790471a05de169061550d51" Jan 23 14:49:54 crc kubenswrapper[4775]: I0123 14:49:54.170447 4775 scope.go:117] "RemoveContainer" containerID="9795a40e8b362f20a5bafb6221130232aed660a8237ad820b9b5c489d963be47" Jan 23 14:49:54 crc kubenswrapper[4775]: E0123 14:49:54.171478 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9795a40e8b362f20a5bafb6221130232aed660a8237ad820b9b5c489d963be47\": container with ID starting with 9795a40e8b362f20a5bafb6221130232aed660a8237ad820b9b5c489d963be47 not found: ID does not exist" containerID="9795a40e8b362f20a5bafb6221130232aed660a8237ad820b9b5c489d963be47" Jan 23 14:49:54 crc kubenswrapper[4775]: I0123 14:49:54.171510 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9795a40e8b362f20a5bafb6221130232aed660a8237ad820b9b5c489d963be47"} err="failed to get container status \"9795a40e8b362f20a5bafb6221130232aed660a8237ad820b9b5c489d963be47\": rpc error: code = NotFound desc = could not find container \"9795a40e8b362f20a5bafb6221130232aed660a8237ad820b9b5c489d963be47\": container with ID starting with 9795a40e8b362f20a5bafb6221130232aed660a8237ad820b9b5c489d963be47 not found: ID does not exist" Jan 23 14:49:54 crc kubenswrapper[4775]: I0123 14:49:54.171530 4775 scope.go:117] "RemoveContainer" containerID="3998f4e1023e2b01b3b3037ee3f54b7b541f7dd5b790471a05de169061550d51" Jan 23 14:49:54 crc kubenswrapper[4775]: E0123 14:49:54.172032 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3998f4e1023e2b01b3b3037ee3f54b7b541f7dd5b790471a05de169061550d51\": container with ID starting with 3998f4e1023e2b01b3b3037ee3f54b7b541f7dd5b790471a05de169061550d51 not found: ID does not exist" containerID="3998f4e1023e2b01b3b3037ee3f54b7b541f7dd5b790471a05de169061550d51" Jan 23 14:49:54 crc kubenswrapper[4775]: I0123 14:49:54.172062 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3998f4e1023e2b01b3b3037ee3f54b7b541f7dd5b790471a05de169061550d51"} err="failed to get container status \"3998f4e1023e2b01b3b3037ee3f54b7b541f7dd5b790471a05de169061550d51\": rpc error: code = NotFound desc = could not find container \"3998f4e1023e2b01b3b3037ee3f54b7b541f7dd5b790471a05de169061550d51\": container with ID starting with 3998f4e1023e2b01b3b3037ee3f54b7b541f7dd5b790471a05de169061550d51 not found: ID does not exist" Jan 23 14:49:55 crc kubenswrapper[4775]: I0123 14:49:55.731562 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41dd897c-4a67-4a0a-a7a3-c17b6d05653d" path="/var/lib/kubelet/pods/41dd897c-4a67-4a0a-a7a3-c17b6d05653d/volumes" Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.894160 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bl77g"] Jan 23 14:50:02 crc kubenswrapper[4775]: E0123 14:50:02.895046 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41dd897c-4a67-4a0a-a7a3-c17b6d05653d" containerName="copy" Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.895062 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="41dd897c-4a67-4a0a-a7a3-c17b6d05653d" containerName="copy" Jan 23 14:50:02 crc 
Jan 23 14:50:02 crc kubenswrapper[4775]: E0123 14:50:02.895076 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a0e2681-58a7-4050-9dd0-3b0d77bdde6c" containerName="extract-utilities"
Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.895083 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a0e2681-58a7-4050-9dd0-3b0d77bdde6c" containerName="extract-utilities"
Jan 23 14:50:02 crc kubenswrapper[4775]: E0123 14:50:02.895103 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5820a548-636b-4a69-b8d6-b947ee11e3fd" containerName="extract-utilities"
Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.895113 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="5820a548-636b-4a69-b8d6-b947ee11e3fd" containerName="extract-utilities"
Jan 23 14:50:02 crc kubenswrapper[4775]: E0123 14:50:02.895133 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5820a548-636b-4a69-b8d6-b947ee11e3fd" containerName="extract-content"
Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.895140 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="5820a548-636b-4a69-b8d6-b947ee11e3fd" containerName="extract-content"
Jan 23 14:50:02 crc kubenswrapper[4775]: E0123 14:50:02.895155 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a0e2681-58a7-4050-9dd0-3b0d77bdde6c" containerName="registry-server"
Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.895162 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a0e2681-58a7-4050-9dd0-3b0d77bdde6c" containerName="registry-server"
Jan 23 14:50:02 crc kubenswrapper[4775]: E0123 14:50:02.895170 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8" containerName="extract-utilities"
Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.895178 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8" containerName="extract-utilities"
Jan 23 14:50:02 crc kubenswrapper[4775]: E0123 14:50:02.895186 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8" containerName="extract-content"
Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.895192 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8" containerName="extract-content"
Jan 23 14:50:02 crc kubenswrapper[4775]: E0123 14:50:02.895202 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5820a548-636b-4a69-b8d6-b947ee11e3fd" containerName="registry-server"
Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.895209 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="5820a548-636b-4a69-b8d6-b947ee11e3fd" containerName="registry-server"
Jan 23 14:50:02 crc kubenswrapper[4775]: E0123 14:50:02.895221 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a0e2681-58a7-4050-9dd0-3b0d77bdde6c" containerName="extract-content"
Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.895229 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a0e2681-58a7-4050-9dd0-3b0d77bdde6c" containerName="extract-content"
Jan 23 14:50:02 crc kubenswrapper[4775]: E0123 14:50:02.895236 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41dd897c-4a67-4a0a-a7a3-c17b6d05653d" containerName="gather"
Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.895242 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="41dd897c-4a67-4a0a-a7a3-c17b6d05653d" containerName="gather"
Jan 23 14:50:02 crc kubenswrapper[4775]: E0123 14:50:02.895255 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8" containerName="registry-server"
Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.895262 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8" containerName="registry-server"
Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.895421 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a0e2681-58a7-4050-9dd0-3b0d77bdde6c" containerName="registry-server"
Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.895437 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="5820a548-636b-4a69-b8d6-b947ee11e3fd" containerName="registry-server"
Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.895458 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8" containerName="registry-server"
Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.895468 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="41dd897c-4a67-4a0a-a7a3-c17b6d05653d" containerName="copy"
Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.895481 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="41dd897c-4a67-4a0a-a7a3-c17b6d05653d" containerName="gather"
Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.896811 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bl77g"
Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.931740 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bl77g"]
Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.995218 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-catalog-content\") pod \"redhat-operators-bl77g\" (UID: \"a2c3db3a-a4f0-42e0-95dd-0098e860d77a\") " pod="openshift-marketplace/redhat-operators-bl77g"
Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.995498 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-utilities\") pod \"redhat-operators-bl77g\" (UID: \"a2c3db3a-a4f0-42e0-95dd-0098e860d77a\") " pod="openshift-marketplace/redhat-operators-bl77g"
Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.995678 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdl82\" (UniqueName: \"kubernetes.io/projected/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-kube-api-access-fdl82\") pod \"redhat-operators-bl77g\" (UID: \"a2c3db3a-a4f0-42e0-95dd-0098e860d77a\") " pod="openshift-marketplace/redhat-operators-bl77g"
Jan 23 14:50:03 crc kubenswrapper[4775]: I0123 14:50:03.096679 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-catalog-content\") pod \"redhat-operators-bl77g\" (UID: \"a2c3db3a-a4f0-42e0-95dd-0098e860d77a\") " pod="openshift-marketplace/redhat-operators-bl77g"
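Before admitting redhat-operators-bl77g, the CPU and memory managers drop accounting left over from containers of pods that no longer exist; the state is keyed by (podUID, containerName), which is why every container of each stale pod is removed individually above. A minimal sketch of that kind of keyed cleanup (the map stands in for the real checkpointed CPUSet state):

// Sketch: stale-state removal keyed by (podUID, containerName),
// in the spirit of cpu_manager/state_mem above.
package main

import "fmt"

type key struct{ podUID, container string }

type state struct{ assignments map[key]string }

// removeStale deletes every assignment whose pod is no longer active.
func (s *state) removeStale(activePods map[string]bool) {
	for k := range s.assignments {
		if !activePods[k.podUID] {
			fmt.Printf("removing stale container %s/%s\n", k.podUID, k.container)
			delete(s.assignments, k)
		}
	}
}

func main() {
	s := &state{assignments: map[key]string{
		{podUID: "41dd897c-4a67-4a0a-a7a3-c17b6d05653d", container: "gather"}: "0-3",
		{podUID: "41dd897c-4a67-4a0a-a7a3-c17b6d05653d", container: "copy"}:   "0-3",
	}}
	s.removeStale(map[string]bool{}) // no active pods: both entries are stale
}
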
\"kubernetes.io/empty-dir/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-utilities\") pod \"redhat-operators-bl77g\" (UID: \"a2c3db3a-a4f0-42e0-95dd-0098e860d77a\") " pod="openshift-marketplace/redhat-operators-bl77g" Jan 23 14:50:03 crc kubenswrapper[4775]: I0123 14:50:03.097351 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-catalog-content\") pod \"redhat-operators-bl77g\" (UID: \"a2c3db3a-a4f0-42e0-95dd-0098e860d77a\") " pod="openshift-marketplace/redhat-operators-bl77g" Jan 23 14:50:03 crc kubenswrapper[4775]: I0123 14:50:03.097677 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-utilities\") pod \"redhat-operators-bl77g\" (UID: \"a2c3db3a-a4f0-42e0-95dd-0098e860d77a\") " pod="openshift-marketplace/redhat-operators-bl77g" Jan 23 14:50:03 crc kubenswrapper[4775]: I0123 14:50:03.097971 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdl82\" (UniqueName: \"kubernetes.io/projected/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-kube-api-access-fdl82\") pod \"redhat-operators-bl77g\" (UID: \"a2c3db3a-a4f0-42e0-95dd-0098e860d77a\") " pod="openshift-marketplace/redhat-operators-bl77g" Jan 23 14:50:03 crc kubenswrapper[4775]: I0123 14:50:03.115417 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdl82\" (UniqueName: \"kubernetes.io/projected/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-kube-api-access-fdl82\") pod \"redhat-operators-bl77g\" (UID: \"a2c3db3a-a4f0-42e0-95dd-0098e860d77a\") " pod="openshift-marketplace/redhat-operators-bl77g" Jan 23 14:50:03 crc kubenswrapper[4775]: I0123 14:50:03.218632 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bl77g" Jan 23 14:50:03 crc kubenswrapper[4775]: I0123 14:50:03.659691 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bl77g"] Jan 23 14:50:04 crc kubenswrapper[4775]: I0123 14:50:04.175719 4775 generic.go:334] "Generic (PLEG): container finished" podID="a2c3db3a-a4f0-42e0-95dd-0098e860d77a" containerID="86e48ca1568a00ced41e2a25c3c56d895ec4ddb7579c973ca0d7b9bf9c7cb176" exitCode=0 Jan 23 14:50:04 crc kubenswrapper[4775]: I0123 14:50:04.175872 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl77g" event={"ID":"a2c3db3a-a4f0-42e0-95dd-0098e860d77a","Type":"ContainerDied","Data":"86e48ca1568a00ced41e2a25c3c56d895ec4ddb7579c973ca0d7b9bf9c7cb176"} Jan 23 14:50:04 crc kubenswrapper[4775]: I0123 14:50:04.176214 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl77g" event={"ID":"a2c3db3a-a4f0-42e0-95dd-0098e860d77a","Type":"ContainerStarted","Data":"2e47c4962c77a254f758bcf21d44c4606a2440152efa666525d3e342f58f6a2c"} Jan 23 14:50:04 crc kubenswrapper[4775]: I0123 14:50:04.178601 4775 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 14:50:05 crc kubenswrapper[4775]: I0123 14:50:05.186795 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl77g" event={"ID":"a2c3db3a-a4f0-42e0-95dd-0098e860d77a","Type":"ContainerStarted","Data":"477e40967f5791e68c5432610a5e9b577d2d3567aff6149f47fcb3e320c71146"} Jan 23 14:50:06 crc kubenswrapper[4775]: I0123 14:50:06.204931 4775 generic.go:334] "Generic (PLEG): container finished" podID="a2c3db3a-a4f0-42e0-95dd-0098e860d77a" containerID="477e40967f5791e68c5432610a5e9b577d2d3567aff6149f47fcb3e320c71146" exitCode=0 Jan 23 14:50:06 crc kubenswrapper[4775]: I0123 14:50:06.205023 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl77g" event={"ID":"a2c3db3a-a4f0-42e0-95dd-0098e860d77a","Type":"ContainerDied","Data":"477e40967f5791e68c5432610a5e9b577d2d3567aff6149f47fcb3e320c71146"} Jan 23 14:50:07 crc kubenswrapper[4775]: I0123 14:50:07.215682 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl77g" event={"ID":"a2c3db3a-a4f0-42e0-95dd-0098e860d77a","Type":"ContainerStarted","Data":"880664f75a1c2365c45c7d84855873557a42cb4d3a2067a3e14c4b3387bc5302"} Jan 23 14:50:07 crc kubenswrapper[4775]: I0123 14:50:07.241357 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bl77g" podStartSLOduration=2.715737217 podStartE2EDuration="5.241339173s" podCreationTimestamp="2026-01-23 14:50:02 +0000 UTC" firstStartedPulling="2026-01-23 14:50:04.177835081 +0000 UTC m=+2751.172663851" lastFinishedPulling="2026-01-23 14:50:06.703437067 +0000 UTC m=+2753.698265807" observedRunningTime="2026-01-23 14:50:07.238523414 +0000 UTC m=+2754.233352164" watchObservedRunningTime="2026-01-23 14:50:07.241339173 +0000 UTC m=+2754.236167923" Jan 23 14:50:13 crc kubenswrapper[4775]: I0123 14:50:13.219125 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bl77g" Jan 23 14:50:13 crc kubenswrapper[4775]: I0123 14:50:13.219837 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bl77g" Jan 23 14:50:14 crc 
Jan 23 14:50:14 crc kubenswrapper[4775]: I0123 14:50:14.294142 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bl77g" podUID="a2c3db3a-a4f0-42e0-95dd-0098e860d77a" containerName="registry-server" probeResult="failure" output=<
Jan 23 14:50:14 crc kubenswrapper[4775]: timeout: failed to connect service ":50051" within 1s
Jan 23 14:50:14 crc kubenswrapper[4775]: >
Jan 23 14:50:23 crc kubenswrapper[4775]: I0123 14:50:23.295485 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bl77g"
Jan 23 14:50:23 crc kubenswrapper[4775]: I0123 14:50:23.366928 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bl77g"
Jan 23 14:50:24 crc kubenswrapper[4775]: I0123 14:50:24.082413 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bl77g"]
Jan 23 14:50:24 crc kubenswrapper[4775]: I0123 14:50:24.407121 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bl77g" podUID="a2c3db3a-a4f0-42e0-95dd-0098e860d77a" containerName="registry-server" containerID="cri-o://880664f75a1c2365c45c7d84855873557a42cb4d3a2067a3e14c4b3387bc5302" gracePeriod=2
Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.004886 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bl77g"
Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.123981 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-catalog-content\") pod \"a2c3db3a-a4f0-42e0-95dd-0098e860d77a\" (UID: \"a2c3db3a-a4f0-42e0-95dd-0098e860d77a\") "
Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.124098 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdl82\" (UniqueName: \"kubernetes.io/projected/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-kube-api-access-fdl82\") pod \"a2c3db3a-a4f0-42e0-95dd-0098e860d77a\" (UID: \"a2c3db3a-a4f0-42e0-95dd-0098e860d77a\") "
Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.124347 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-utilities\") pod \"a2c3db3a-a4f0-42e0-95dd-0098e860d77a\" (UID: \"a2c3db3a-a4f0-42e0-95dd-0098e860d77a\") "
Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.126018 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-utilities" (OuterVolumeSpecName: "utilities") pod "a2c3db3a-a4f0-42e0-95dd-0098e860d77a" (UID: "a2c3db3a-a4f0-42e0-95dd-0098e860d77a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
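The startup probe failure above ("timeout: failed to connect service \":50051\" within 1s") is the signature of a grpc-health-probe-style check against the registry-server's gRPC port; at 14:50:23 the same probe flips to "started" and readiness follows. A minimal client-side sketch of such a health check, with the address and 1s timeout taken from the log (whether the catalog image uses this exact mechanism is an assumption):

// Sketch: gRPC health check against :50051, the check that was
// timing out above. Uses the standard grpc_health_v1 service.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/health/grpc_health_v1"
)

func serving(addr string, timeout time.Duration) (bool, error) {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	conn, err := grpc.DialContext(ctx, addr,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock()) // block so the 1s deadline governs the connect
	if err != nil {
		return false, err // "failed to connect service" path
	}
	defer conn.Close()
	resp, err := grpc_health_v1.NewHealthClient(conn).Check(ctx,
		&grpc_health_v1.HealthCheckRequest{})
	if err != nil {
		return false, err
	}
	return resp.GetStatus() == grpc_health_v1.HealthCheckResponse_SERVING, nil
}

func main() {
	ok, err := serving("127.0.0.1:50051", time.Second)
	fmt.Println(ok, err)
}
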
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.226703 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdl82\" (UniqueName: \"kubernetes.io/projected/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-kube-api-access-fdl82\") on node \"crc\" DevicePath \"\"" Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.226742 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.255062 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a2c3db3a-a4f0-42e0-95dd-0098e860d77a" (UID: "a2c3db3a-a4f0-42e0-95dd-0098e860d77a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.329006 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.424202 4775 generic.go:334] "Generic (PLEG): container finished" podID="a2c3db3a-a4f0-42e0-95dd-0098e860d77a" containerID="880664f75a1c2365c45c7d84855873557a42cb4d3a2067a3e14c4b3387bc5302" exitCode=0 Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.424262 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl77g" event={"ID":"a2c3db3a-a4f0-42e0-95dd-0098e860d77a","Type":"ContainerDied","Data":"880664f75a1c2365c45c7d84855873557a42cb4d3a2067a3e14c4b3387bc5302"} Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.424303 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl77g" event={"ID":"a2c3db3a-a4f0-42e0-95dd-0098e860d77a","Type":"ContainerDied","Data":"2e47c4962c77a254f758bcf21d44c4606a2440152efa666525d3e342f58f6a2c"} Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.424335 4775 scope.go:117] "RemoveContainer" containerID="880664f75a1c2365c45c7d84855873557a42cb4d3a2067a3e14c4b3387bc5302" Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.424524 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bl77g" Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.478683 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bl77g"] Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.481399 4775 scope.go:117] "RemoveContainer" containerID="477e40967f5791e68c5432610a5e9b577d2d3567aff6149f47fcb3e320c71146" Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.489440 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bl77g"] Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.518915 4775 scope.go:117] "RemoveContainer" containerID="86e48ca1568a00ced41e2a25c3c56d895ec4ddb7579c973ca0d7b9bf9c7cb176" Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.565149 4775 scope.go:117] "RemoveContainer" containerID="880664f75a1c2365c45c7d84855873557a42cb4d3a2067a3e14c4b3387bc5302" Jan 23 14:50:25 crc kubenswrapper[4775]: E0123 14:50:25.565879 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"880664f75a1c2365c45c7d84855873557a42cb4d3a2067a3e14c4b3387bc5302\": container with ID starting with 880664f75a1c2365c45c7d84855873557a42cb4d3a2067a3e14c4b3387bc5302 not found: ID does not exist" containerID="880664f75a1c2365c45c7d84855873557a42cb4d3a2067a3e14c4b3387bc5302" Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.565920 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"880664f75a1c2365c45c7d84855873557a42cb4d3a2067a3e14c4b3387bc5302"} err="failed to get container status \"880664f75a1c2365c45c7d84855873557a42cb4d3a2067a3e14c4b3387bc5302\": rpc error: code = NotFound desc = could not find container \"880664f75a1c2365c45c7d84855873557a42cb4d3a2067a3e14c4b3387bc5302\": container with ID starting with 880664f75a1c2365c45c7d84855873557a42cb4d3a2067a3e14c4b3387bc5302 not found: ID does not exist" Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.565951 4775 scope.go:117] "RemoveContainer" containerID="477e40967f5791e68c5432610a5e9b577d2d3567aff6149f47fcb3e320c71146" Jan 23 14:50:25 crc kubenswrapper[4775]: E0123 14:50:25.566250 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"477e40967f5791e68c5432610a5e9b577d2d3567aff6149f47fcb3e320c71146\": container with ID starting with 477e40967f5791e68c5432610a5e9b577d2d3567aff6149f47fcb3e320c71146 not found: ID does not exist" containerID="477e40967f5791e68c5432610a5e9b577d2d3567aff6149f47fcb3e320c71146" Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.566280 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"477e40967f5791e68c5432610a5e9b577d2d3567aff6149f47fcb3e320c71146"} err="failed to get container status \"477e40967f5791e68c5432610a5e9b577d2d3567aff6149f47fcb3e320c71146\": rpc error: code = NotFound desc = could not find container \"477e40967f5791e68c5432610a5e9b577d2d3567aff6149f47fcb3e320c71146\": container with ID starting with 477e40967f5791e68c5432610a5e9b577d2d3567aff6149f47fcb3e320c71146 not found: ID does not exist" Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.566298 4775 scope.go:117] "RemoveContainer" containerID="86e48ca1568a00ced41e2a25c3c56d895ec4ddb7579c973ca0d7b9bf9c7cb176" Jan 23 14:50:25 crc kubenswrapper[4775]: E0123 14:50:25.566740 4775 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"86e48ca1568a00ced41e2a25c3c56d895ec4ddb7579c973ca0d7b9bf9c7cb176\": container with ID starting with 86e48ca1568a00ced41e2a25c3c56d895ec4ddb7579c973ca0d7b9bf9c7cb176 not found: ID does not exist" containerID="86e48ca1568a00ced41e2a25c3c56d895ec4ddb7579c973ca0d7b9bf9c7cb176" Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.566771 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86e48ca1568a00ced41e2a25c3c56d895ec4ddb7579c973ca0d7b9bf9c7cb176"} err="failed to get container status \"86e48ca1568a00ced41e2a25c3c56d895ec4ddb7579c973ca0d7b9bf9c7cb176\": rpc error: code = NotFound desc = could not find container \"86e48ca1568a00ced41e2a25c3c56d895ec4ddb7579c973ca0d7b9bf9c7cb176\": container with ID starting with 86e48ca1568a00ced41e2a25c3c56d895ec4ddb7579c973ca0d7b9bf9c7cb176 not found: ID does not exist" Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.726633 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2c3db3a-a4f0-42e0-95dd-0098e860d77a" path="/var/lib/kubelet/pods/a2c3db3a-a4f0-42e0-95dd-0098e860d77a/volumes" Jan 23 14:51:23 crc kubenswrapper[4775]: I0123 14:51:23.219177 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:51:23 crc kubenswrapper[4775]: I0123 14:51:23.219839 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:51:53 crc kubenswrapper[4775]: I0123 14:51:53.219469 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:51:53 crc kubenswrapper[4775]: I0123 14:51:53.220644 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:52:23 crc kubenswrapper[4775]: I0123 14:52:23.218841 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:52:23 crc kubenswrapper[4775]: I0123 14:52:23.222037 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:52:23 crc kubenswrapper[4775]: I0123 14:52:23.222330 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:52:23 crc kubenswrapper[4775]: I0123 14:52:23.224280 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b5e598cbf349da815af5db0b22df9dc34e13444bedef413becde0b98162db747"} pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 14:52:23 crc kubenswrapper[4775]: I0123 14:52:23.224625 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" containerID="cri-o://b5e598cbf349da815af5db0b22df9dc34e13444bedef413becde0b98162db747" gracePeriod=600 Jan 23 14:52:23 crc kubenswrapper[4775]: E0123 14:52:23.359516 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:52:24 crc kubenswrapper[4775]: I0123 14:52:24.229997 4775 generic.go:334] "Generic (PLEG): container finished" podID="4fea0767-0566-4214-855d-ed0373946271" containerID="b5e598cbf349da815af5db0b22df9dc34e13444bedef413becde0b98162db747" exitCode=0 Jan 23 14:52:24 crc kubenswrapper[4775]: I0123 14:52:24.230955 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerDied","Data":"b5e598cbf349da815af5db0b22df9dc34e13444bedef413becde0b98162db747"} Jan 23 14:52:24 crc kubenswrapper[4775]: I0123 14:52:24.231082 4775 scope.go:117] "RemoveContainer" containerID="fb9925329613a52dcbc6411915216316f974c31f7e89dd07fdacbd9dd078559f" Jan 23 14:52:24 crc kubenswrapper[4775]: I0123 14:52:24.232018 4775 scope.go:117] "RemoveContainer" containerID="b5e598cbf349da815af5db0b22df9dc34e13444bedef413becde0b98162db747" Jan 23 14:52:24 crc kubenswrapper[4775]: E0123 14:52:24.232494 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:52:35 crc kubenswrapper[4775]: I0123 14:52:35.713853 4775 scope.go:117] "RemoveContainer" containerID="b5e598cbf349da815af5db0b22df9dc34e13444bedef413becde0b98162db747" Jan 23 14:52:35 crc kubenswrapper[4775]: E0123 14:52:35.715135 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:52:48 crc 
Jan 23 14:52:48 crc kubenswrapper[4775]: I0123 14:52:48.714688 4775 scope.go:117] "RemoveContainer" containerID="b5e598cbf349da815af5db0b22df9dc34e13444bedef413becde0b98162db747"
Jan 23 14:52:48 crc kubenswrapper[4775]: E0123 14:52:48.715788 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271"
Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.196916 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb"]
Jan 23 14:53:01 crc kubenswrapper[4775]: E0123 14:53:01.197636 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2c3db3a-a4f0-42e0-95dd-0098e860d77a" containerName="registry-server"
Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.197649 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2c3db3a-a4f0-42e0-95dd-0098e860d77a" containerName="registry-server"
Jan 23 14:53:01 crc kubenswrapper[4775]: E0123 14:53:01.197667 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2c3db3a-a4f0-42e0-95dd-0098e860d77a" containerName="extract-content"
Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.197672 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2c3db3a-a4f0-42e0-95dd-0098e860d77a" containerName="extract-content"
Jan 23 14:53:01 crc kubenswrapper[4775]: E0123 14:53:01.197685 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2c3db3a-a4f0-42e0-95dd-0098e860d77a" containerName="extract-utilities"
Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.197691 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2c3db3a-a4f0-42e0-95dd-0098e860d77a" containerName="extract-utilities"
Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.197843 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2c3db3a-a4f0-42e0-95dd-0098e860d77a" containerName="registry-server"
Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.198371 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb"
Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.200499 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-config-data"
Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.200914 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-scripts"
Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.246781 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb"]
Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.248398 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f75714c3-400a-4e4a-b1b4-220a7b426db4-scripts\") pod \"nova-kuttl-cell1-cell-delete-6p9nb\" (UID: \"f75714c3-400a-4e4a-b1b4-220a7b426db4\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb"
Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.248497 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f75714c3-400a-4e4a-b1b4-220a7b426db4-config-data\") pod \"nova-kuttl-cell1-cell-delete-6p9nb\" (UID: \"f75714c3-400a-4e4a-b1b4-220a7b426db4\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb"
Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.248524 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2vvm\" (UniqueName: \"kubernetes.io/projected/f75714c3-400a-4e4a-b1b4-220a7b426db4-kube-api-access-d2vvm\") pod \"nova-kuttl-cell1-cell-delete-6p9nb\" (UID: \"f75714c3-400a-4e4a-b1b4-220a7b426db4\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb"
Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.349679 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f75714c3-400a-4e4a-b1b4-220a7b426db4-scripts\") pod \"nova-kuttl-cell1-cell-delete-6p9nb\" (UID: \"f75714c3-400a-4e4a-b1b4-220a7b426db4\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb"
Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.349780 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f75714c3-400a-4e4a-b1b4-220a7b426db4-config-data\") pod \"nova-kuttl-cell1-cell-delete-6p9nb\" (UID: \"f75714c3-400a-4e4a-b1b4-220a7b426db4\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb"
Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.349822 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2vvm\" (UniqueName: \"kubernetes.io/projected/f75714c3-400a-4e4a-b1b4-220a7b426db4-kube-api-access-d2vvm\") pod \"nova-kuttl-cell1-cell-delete-6p9nb\" (UID: \"f75714c3-400a-4e4a-b1b4-220a7b426db4\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb"
Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.356304 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f75714c3-400a-4e4a-b1b4-220a7b426db4-config-data\") pod \"nova-kuttl-cell1-cell-delete-6p9nb\" (UID: \"f75714c3-400a-4e4a-b1b4-220a7b426db4\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb"
Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.356533 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f75714c3-400a-4e4a-b1b4-220a7b426db4-scripts\") pod \"nova-kuttl-cell1-cell-delete-6p9nb\" (UID: \"f75714c3-400a-4e4a-b1b4-220a7b426db4\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb"
Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.367339 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2vvm\" (UniqueName: \"kubernetes.io/projected/f75714c3-400a-4e4a-b1b4-220a7b426db4-kube-api-access-d2vvm\") pod \"nova-kuttl-cell1-cell-delete-6p9nb\" (UID: \"f75714c3-400a-4e4a-b1b4-220a7b426db4\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb"
Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.525341 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb"
Jan 23 14:53:02 crc kubenswrapper[4775]: I0123 14:53:02.033144 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb"]
Jan 23 14:53:02 crc kubenswrapper[4775]: I0123 14:53:02.631898 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" event={"ID":"f75714c3-400a-4e4a-b1b4-220a7b426db4","Type":"ContainerStarted","Data":"e95d4dff0e0513663060b6ceace58f07777bfbaef6dda1f8bb96d1849109c1ba"}
Jan 23 14:53:02 crc kubenswrapper[4775]: I0123 14:53:02.632280 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" event={"ID":"f75714c3-400a-4e4a-b1b4-220a7b426db4","Type":"ContainerStarted","Data":"f0146f8914dc95860df20fbac462375f9cb984677043cf48b9621229f25f5445"}
Jan 23 14:53:02 crc kubenswrapper[4775]: I0123 14:53:02.714506 4775 scope.go:117] "RemoveContainer" containerID="b5e598cbf349da815af5db0b22df9dc34e13444bedef413becde0b98162db747"
Jan 23 14:53:02 crc kubenswrapper[4775]: E0123 14:53:02.714853 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271"
Jan 23 14:53:06 crc kubenswrapper[4775]: I0123 14:53:06.664897 4775 generic.go:334] "Generic (PLEG): container finished" podID="f75714c3-400a-4e4a-b1b4-220a7b426db4" containerID="e95d4dff0e0513663060b6ceace58f07777bfbaef6dda1f8bb96d1849109c1ba" exitCode=2
Jan 23 14:53:06 crc kubenswrapper[4775]: I0123 14:53:06.665009 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" event={"ID":"f75714c3-400a-4e4a-b1b4-220a7b426db4","Type":"ContainerDied","Data":"e95d4dff0e0513663060b6ceace58f07777bfbaef6dda1f8bb96d1849109c1ba"}
Jan 23 14:53:06 crc kubenswrapper[4775]: I0123 14:53:06.666028 4775 scope.go:117] "RemoveContainer" containerID="e95d4dff0e0513663060b6ceace58f07777bfbaef6dda1f8bb96d1849109c1ba"
Jan 23 14:53:07 crc kubenswrapper[4775]: I0123 14:53:07.679068 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" event={"ID":"f75714c3-400a-4e4a-b1b4-220a7b426db4","Type":"ContainerStarted","Data":"f1a866b28be94125fb7ef2098abfd2da9afbb3547f72a6e0a546f64e476fc02e"}
event={"ID":"f75714c3-400a-4e4a-b1b4-220a7b426db4","Type":"ContainerStarted","Data":"f1a866b28be94125fb7ef2098abfd2da9afbb3547f72a6e0a546f64e476fc02e"} Jan 23 14:53:07 crc kubenswrapper[4775]: I0123 14:53:07.710546 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" podStartSLOduration=6.7105310320000005 podStartE2EDuration="6.710531032s" podCreationTimestamp="2026-01-23 14:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:53:02.647668998 +0000 UTC m=+2929.642497738" watchObservedRunningTime="2026-01-23 14:53:07.710531032 +0000 UTC m=+2934.705359772" Jan 23 14:53:11 crc kubenswrapper[4775]: I0123 14:53:11.737412 4775 generic.go:334] "Generic (PLEG): container finished" podID="f75714c3-400a-4e4a-b1b4-220a7b426db4" containerID="f1a866b28be94125fb7ef2098abfd2da9afbb3547f72a6e0a546f64e476fc02e" exitCode=2 Jan 23 14:53:11 crc kubenswrapper[4775]: I0123 14:53:11.737502 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" event={"ID":"f75714c3-400a-4e4a-b1b4-220a7b426db4","Type":"ContainerDied","Data":"f1a866b28be94125fb7ef2098abfd2da9afbb3547f72a6e0a546f64e476fc02e"} Jan 23 14:53:11 crc kubenswrapper[4775]: I0123 14:53:11.740591 4775 scope.go:117] "RemoveContainer" containerID="e95d4dff0e0513663060b6ceace58f07777bfbaef6dda1f8bb96d1849109c1ba" Jan 23 14:53:11 crc kubenswrapper[4775]: I0123 14:53:11.741405 4775 scope.go:117] "RemoveContainer" containerID="f1a866b28be94125fb7ef2098abfd2da9afbb3547f72a6e0a546f64e476fc02e" Jan 23 14:53:11 crc kubenswrapper[4775]: E0123 14:53:11.741842 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 10s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-6p9nb_nova-kuttl-default(f75714c3-400a-4e4a-b1b4-220a7b426db4)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" podUID="f75714c3-400a-4e4a-b1b4-220a7b426db4" Jan 23 14:53:18 crc kubenswrapper[4775]: I0123 14:53:18.492093 4775 scope.go:117] "RemoveContainer" containerID="b5e598cbf349da815af5db0b22df9dc34e13444bedef413becde0b98162db747" Jan 23 14:53:18 crc kubenswrapper[4775]: E0123 14:53:18.492756 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:53:22 crc kubenswrapper[4775]: I0123 14:53:22.713683 4775 scope.go:117] "RemoveContainer" containerID="f1a866b28be94125fb7ef2098abfd2da9afbb3547f72a6e0a546f64e476fc02e" Jan 23 14:53:23 crc kubenswrapper[4775]: I0123 14:53:23.546688 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" event={"ID":"f75714c3-400a-4e4a-b1b4-220a7b426db4","Type":"ContainerStarted","Data":"f973f7a626434d8012e82e5f3a84a0eb7f802f7de6e71a15c7f64d93c61ca25c"} Jan 23 14:53:28 crc kubenswrapper[4775]: I0123 14:53:28.593680 4775 generic.go:334] "Generic (PLEG): container finished" podID="f75714c3-400a-4e4a-b1b4-220a7b426db4" 
containerID="f973f7a626434d8012e82e5f3a84a0eb7f802f7de6e71a15c7f64d93c61ca25c" exitCode=2 Jan 23 14:53:28 crc kubenswrapper[4775]: I0123 14:53:28.594095 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" event={"ID":"f75714c3-400a-4e4a-b1b4-220a7b426db4","Type":"ContainerDied","Data":"f973f7a626434d8012e82e5f3a84a0eb7f802f7de6e71a15c7f64d93c61ca25c"} Jan 23 14:53:28 crc kubenswrapper[4775]: I0123 14:53:28.594144 4775 scope.go:117] "RemoveContainer" containerID="f1a866b28be94125fb7ef2098abfd2da9afbb3547f72a6e0a546f64e476fc02e" Jan 23 14:53:28 crc kubenswrapper[4775]: I0123 14:53:28.594953 4775 scope.go:117] "RemoveContainer" containerID="f973f7a626434d8012e82e5f3a84a0eb7f802f7de6e71a15c7f64d93c61ca25c" Jan 23 14:53:28 crc kubenswrapper[4775]: E0123 14:53:28.595408 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-6p9nb_nova-kuttl-default(f75714c3-400a-4e4a-b1b4-220a7b426db4)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" podUID="f75714c3-400a-4e4a-b1b4-220a7b426db4" Jan 23 14:53:29 crc kubenswrapper[4775]: I0123 14:53:29.713987 4775 scope.go:117] "RemoveContainer" containerID="b5e598cbf349da815af5db0b22df9dc34e13444bedef413becde0b98162db747" Jan 23 14:53:29 crc kubenswrapper[4775]: E0123 14:53:29.714754 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:53:43 crc kubenswrapper[4775]: I0123 14:53:43.727490 4775 scope.go:117] "RemoveContainer" containerID="f973f7a626434d8012e82e5f3a84a0eb7f802f7de6e71a15c7f64d93c61ca25c" Jan 23 14:53:43 crc kubenswrapper[4775]: I0123 14:53:43.728215 4775 scope.go:117] "RemoveContainer" containerID="b5e598cbf349da815af5db0b22df9dc34e13444bedef413becde0b98162db747" Jan 23 14:53:43 crc kubenswrapper[4775]: E0123 14:53:43.728558 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:53:43 crc kubenswrapper[4775]: E0123 14:53:43.728613 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-6p9nb_nova-kuttl-default(f75714c3-400a-4e4a-b1b4-220a7b426db4)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" podUID="f75714c3-400a-4e4a-b1b4-220a7b426db4" Jan 23 14:53:54 crc kubenswrapper[4775]: I0123 14:53:54.714505 4775 scope.go:117] "RemoveContainer" containerID="b5e598cbf349da815af5db0b22df9dc34e13444bedef413becde0b98162db747" Jan 23 14:53:54 crc kubenswrapper[4775]: E0123 14:53:54.715322 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:53:57 crc kubenswrapper[4775]: I0123 14:53:57.714507 4775 scope.go:117] "RemoveContainer" containerID="f973f7a626434d8012e82e5f3a84a0eb7f802f7de6e71a15c7f64d93c61ca25c" Jan 23 14:53:58 crc kubenswrapper[4775]: I0123 14:53:58.880279 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" event={"ID":"f75714c3-400a-4e4a-b1b4-220a7b426db4","Type":"ContainerStarted","Data":"c729da8ff3f49f558ed40dd25a653bd1bbbf5df91f14972abf7388a43581a5e1"} Jan 23 14:54:02 crc kubenswrapper[4775]: I0123 14:54:02.927201 4775 generic.go:334] "Generic (PLEG): container finished" podID="f75714c3-400a-4e4a-b1b4-220a7b426db4" containerID="c729da8ff3f49f558ed40dd25a653bd1bbbf5df91f14972abf7388a43581a5e1" exitCode=2 Jan 23 14:54:02 crc kubenswrapper[4775]: I0123 14:54:02.927282 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" event={"ID":"f75714c3-400a-4e4a-b1b4-220a7b426db4","Type":"ContainerDied","Data":"c729da8ff3f49f558ed40dd25a653bd1bbbf5df91f14972abf7388a43581a5e1"} Jan 23 14:54:02 crc kubenswrapper[4775]: I0123 14:54:02.927697 4775 scope.go:117] "RemoveContainer" containerID="f973f7a626434d8012e82e5f3a84a0eb7f802f7de6e71a15c7f64d93c61ca25c" Jan 23 14:54:02 crc kubenswrapper[4775]: I0123 14:54:02.928391 4775 scope.go:117] "RemoveContainer" containerID="c729da8ff3f49f558ed40dd25a653bd1bbbf5df91f14972abf7388a43581a5e1" Jan 23 14:54:02 crc kubenswrapper[4775]: E0123 14:54:02.928666 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-6p9nb_nova-kuttl-default(f75714c3-400a-4e4a-b1b4-220a7b426db4)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" podUID="f75714c3-400a-4e4a-b1b4-220a7b426db4" Jan 23 14:54:06 crc kubenswrapper[4775]: I0123 14:54:06.713976 4775 scope.go:117] "RemoveContainer" containerID="b5e598cbf349da815af5db0b22df9dc34e13444bedef413becde0b98162db747" Jan 23 14:54:06 crc kubenswrapper[4775]: E0123 14:54:06.716079 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:54:15 crc kubenswrapper[4775]: I0123 14:54:15.721277 4775 scope.go:117] "RemoveContainer" containerID="c729da8ff3f49f558ed40dd25a653bd1bbbf5df91f14972abf7388a43581a5e1" Jan 23 14:54:15 crc kubenswrapper[4775]: E0123 14:54:15.722401 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-6p9nb_nova-kuttl-default(f75714c3-400a-4e4a-b1b4-220a7b426db4)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" podUID="f75714c3-400a-4e4a-b1b4-220a7b426db4" Jan 
Jan 23 14:54:20 crc kubenswrapper[4775]: I0123 14:54:20.714727 4775 scope.go:117] "RemoveContainer" containerID="b5e598cbf349da815af5db0b22df9dc34e13444bedef413becde0b98162db747"
Jan 23 14:54:20 crc kubenswrapper[4775]: E0123 14:54:20.717384 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271"
Jan 23 14:54:26 crc kubenswrapper[4775]: I0123 14:54:26.714373 4775 scope.go:117] "RemoveContainer" containerID="c729da8ff3f49f558ed40dd25a653bd1bbbf5df91f14972abf7388a43581a5e1"
Jan 23 14:54:26 crc kubenswrapper[4775]: E0123 14:54:26.715469 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-6p9nb_nova-kuttl-default(f75714c3-400a-4e4a-b1b4-220a7b426db4)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" podUID="f75714c3-400a-4e4a-b1b4-220a7b426db4"
Jan 23 14:54:33 crc kubenswrapper[4775]: I0123 14:54:33.721971 4775 scope.go:117] "RemoveContainer" containerID="b5e598cbf349da815af5db0b22df9dc34e13444bedef413becde0b98162db747"
Jan 23 14:54:33 crc kubenswrapper[4775]: E0123 14:54:33.724845 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271"
Jan 23 14:54:38 crc kubenswrapper[4775]: I0123 14:54:38.714605 4775 scope.go:117] "RemoveContainer" containerID="c729da8ff3f49f558ed40dd25a653bd1bbbf5df91f14972abf7388a43581a5e1"
Jan 23 14:54:38 crc kubenswrapper[4775]: E0123 14:54:38.715288 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-6p9nb_nova-kuttl-default(f75714c3-400a-4e4a-b1b4-220a7b426db4)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" podUID="f75714c3-400a-4e4a-b1b4-220a7b426db4"